7 Workshops in 1 Day

Workshops are one of the most stimulating and interactive ways to learn Python-related skills in depth. Their ambition is to keep shaping a community of interest in the Python language. Early-career professionals and graduate students will benefit from the introductory-level material on these important topics and tools, and mid-career professionals can use the workshops to broaden their horizons.

We start PyCon APAC with one full day of workshops from instructors around the world. Class sizes are small, so you can ask the instructors questions whenever you run into difficulties.

Each workshop costs 1,000 baht per person. PyCon APAC strives to keep education accessible to all; the actual value of each workshop is far higher, and we thank all instructors for their time and effort. Please fill in one form per workshop. Note: moderators do not receive any remuneration, and all remaining profits from the workshops will be donated to the Python Software Foundation.


Workshop Session 1

Training and Deploying your custom Machine Learning Models in Amazon SageMaker using Docker containers

All participants will receive a copy of Joshua Arvin Lat's new book, Machine Learning with Amazon SageMaker Cookbook.


ABOUT JOSHUA

Instructor Joshua Arvin Lat
Max Attendance 50
Level All
Requirements Browser and Internet Connection
Date November 19, 2021
Time 10:00 - 12:00 (ICT)
Duration 120 mins

Description

The workshop has three parts:
- Getting started with custom machine learning models in SageMaker (basic)
- Building your own custom ML container image (intermediate)
- MLOps / ML pipelines (intermediate)
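The contract a custom training container must follow can be sketched in plain Python. The `/opt/ml` paths in the comments are SageMaker's documented defaults; the `train` function and the temp-dir simulation below are illustrative assumptions, not the workshop's actual code:

```python
import json
import pathlib
import tempfile

# SageMaker mounts these paths inside a custom training container:
#   /opt/ml/input/config/hyperparameters.json  -> hyperparameters (as strings)
#   /opt/ml/input/data/<channel>/              -> training data channels
#   /opt/ml/model/                             -> where the model must be saved
# Here we simulate that layout in a temp dir so the sketch runs anywhere.

def train(base):
    base = pathlib.Path(base)
    hp = json.loads((base / "input/config/hyperparameters.json").read_text())
    lr = float(hp.get("learning_rate", "0.1"))  # SageMaker passes values as strings
    # ...real training would read channels under base / "input/data"...
    (base / "model").mkdir(parents=True, exist_ok=True)
    (base / "model/model.json").write_text(json.dumps({"learning_rate": lr}))
    return lr

with tempfile.TemporaryDirectory() as tmp:
    cfg = pathlib.Path(tmp) / "input/config"
    cfg.mkdir(parents=True)
    (cfg / "hyperparameters.json").write_text(json.dumps({"learning_rate": "0.05"}))
    print(train(tmp))  # 0.05
```

Because the container only has to honour these paths, any framework (or none) can run inside it, which is what makes the "bring your own container" approach work.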




Workshop Session 2

Stock Portfolio Optimization for Beginner Investors using Python


ABOUT YEVONNAEL

Instructor Yevonnael Andrew
Max Attendance 30
Level Intermediate
Requirements -
Date November 19, 2021
Time 10:00 - 12:00 (ICT)
Duration 120 mins

Description

All investors, from the largest institutions to the smallest individuals, share the same questions when investing: where to invest, how much to invest, and how much risk to take? Many early investors do not use the necessary analytical techniques, often because they lack sufficient fundamentals.

In this tutorial, I’ll cover the minimum theory and the Python skills you need for portfolio analysis, so that after this session you can immediately start analysing your own portfolio and invest more optimally.

I won’t go deep into the math formulas, since Python will handle them, but I’ll explain the concepts with relevant examples. I will present a stock optimization method, the Markowitz model, which essentially looks for the combination of stocks with the highest ratio of return to volatility.

In the demonstration, I will run an investment simulation on a portfolio of stocks and compare the results to an unoptimized portfolio (funds distributed equally, 1/n to each of n stocks). The simulation shows that even a simple optimization in Python can produce a more optimal portfolio.
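As a rough sketch of this idea (not the workshop's material), the following pure-Python toy samples random portfolio weights and compares the best return/volatility ratio it finds against the naive 1/n portfolio. All the asset numbers are made up for illustration:

```python
import random

# Toy Markowitz-style search with made-up numbers (not real market data):
# sample random weight vectors, keep the one with the best return/volatility
# ratio, and compare it to the naive equal-weight (1/n) portfolio.

means = [0.10, 0.12, 0.07]                    # assumed expected annual returns
cov = [[0.040, 0.006, 0.002],                 # assumed covariance of returns
       [0.006, 0.090, 0.004],
       [0.002, 0.004, 0.010]]

def stats(w):
    """Return (expected return, volatility) for weight vector w."""
    ret = sum(wi * mi for wi, mi in zip(w, means))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(3) for j in range(3))
    return ret, var ** 0.5

def random_weights(rng):
    xs = [rng.random() for _ in range(3)]
    total = sum(xs)
    return [x / total for x in xs]          # weights sum to 1

rng = random.Random(0)
best = max((random_weights(rng) for _ in range(5000)),
           key=lambda w: stats(w)[0] / stats(w)[1])

eq_ret, eq_vol = stats([1 / 3] * 3)
best_ret, best_vol = stats(best)
print("equal 1/n ratio:", round(eq_ret / eq_vol, 3))
print("optimized ratio:", round(best_ret / best_vol, 3))
```

In practice one would use real price history and a proper optimizer rather than random sampling, but the comparison against the 1/n baseline is the same.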


Workshop Session 3

Pipelines 4 All: From Data Engineering to Machine Learning


ABOUT RAMON

Instructor Ramon Perez
Max Attendance 50
Level Intermediate
Requirements Prerequisites (P) and Good-To-Haves (GTH):
1. (P) Attendees are expected to be familiar with Python (about 1 year of coding).
2. (P) Participants should be comfortable with loops, functions, list comprehensions, and if-else statements.
3. (GTH) Knowledge of Prefect, MLFlow, Scikit-Learn, XGBoost, FastAPI, pandas, and the HoloViz suite of libraries is not necessary, but a bit of experience with these libraries would be very beneficial throughout the tutorial.
4. (P) Participants should have at least 5 GB of free space on their computers.
5. (GTH) Experience with an integrated development environment such as Jupyter Lab is not required, but would be very beneficial for the session.
Date November 19, 2021
Time 13:00 - 17:00 (ICT)
Duration 3 hrs 30 mins

Description


Pipelines are useful tools for data professionals at all levels and across industries: analysts who want to automate their analyses, data engineers building extract-transform-load (ETL) pipelines, and data scientists building models that require a series of steps on the data before making a prediction (e.g. tokenization, scaling, or one of the many feature-engineering techniques available). With this in mind, the goal of this tutorial is to help data professionals from diverse fields and levels build pipelines that can move and transform data as well as make useful predictions given different sets of inputs.

The tutorial will emphasise both methodology and frameworks through a top-down approach. Open-source libraries covered include Prefect, MLFlow, Scikit-Learn, XGBoost, FastAPI, pandas, and the HoloViz suite of libraries. In addition, the tutorial covers important concepts from data engineering, data analytics, and machine learning. Participants will also learn concepts from the fields the datasets come from, and build a foundation for reverse-engineering data pipelines and other processes they find in the wild.
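The composition idea behind all of these pipelines can be sketched in a few lines of plain Python. `make_pipeline` below is a hypothetical helper written for illustration, not an API from Prefect or Scikit-Learn; those libraries formalise the same pattern with scheduling, caching, and fitting on top:

```python
from functools import reduce

# A pipeline is just an ordered list of transformations applied to data.
# make_pipeline chains the steps so the output of one feeds the next.

def make_pipeline(*steps):
    def run(data):
        return reduce(lambda acc, step: step(acc), steps, data)
    return run

# ETL-flavoured steps: extract -> transform -> load (here, just collect).
extract = lambda rows: [r.strip() for r in rows]
transform = lambda rows: [r.upper() for r in rows if r]
load = lambda rows: {"count": len(rows), "rows": rows}

etl = make_pipeline(extract, transform, load)
print(etl([" spam ", "", "eggs"]))  # {'count': 2, 'rows': ['SPAM', 'EGGS']}
```

Reverse-engineering a pipeline found in the wild amounts to identifying these individual steps and the order in which they are chained.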

Audience

The target audience for this session includes analysts of all levels, developers, data scientists, and engineers wanting to learn how to create data pipelines for their work.

Format

The tutorial has a setup section, three major lessons of 50 minutes each, and two 10-minute breaks, one at the end of lesson 1 and one at the end of lesson 2. Each of the three major sections contains exercises designed to help solidify the content for participants.

Outline

Total time budgeted, including breaks: 3.5 hours

Introduction and Setup (~10 minutes)
- Getting the environment set up. We will use Jupyter Lab throughout, but participants experiencing difficulties during the session will also have the option to follow the tutorial on Binder
- Quick breakdown of the session
- Flash instructor intro

Data Engineering Pipelines (~40 minutes)
- Intro to the datasets
- ETL pipeline breakdown
- Exercise (7 min)

10-minute break

Data Analytics Pipelines (~50 minutes)
- Intro to the dataset
- Interactive dashboard creation and customisation
- Dashboard, main functions breakdown, and pipeline creation
- Exercise (7 min)

10-minute break

Machine Learning Pipelines (~50 minutes)
- Intro to the dataset
- ML pipelines breakdown
- Model development
- Exercise (7 min)



Workshop Session 4

Machine Learning Lifecycle Made Easy with MLflow


ABOUT KARISHMA

Instructor Karishma Babbar
Co-instructor Kalyan Munjuluri
Max Attendance 50
Level All
Requirements -
Date November 19, 2021
Time 14:30 - 16:00 (ICT)
Duration 75 mins

Description

In theory, the crux of machine learning (ML) development lies in data collection, model creation, model training, and deployment. In reality, machine learning projects are not so straightforward: they are a never-ending cycle of improving the data, the model, and the evaluation. Unlike in traditional software development, ML developers experiment with multiple algorithms, tools, and parameters to optimize performance, and they need to track these experiments to reproduce their work. Furthermore, developers need to use many distinct systems to productionize models.

In this workshop, we introduce MLflow, an open-source platform that aims to simplify the entire ML lifecycle, letting you use any ML library and development tool of your choice to reliably build and share ML applications. MLflow offers simple abstractions through lightweight APIs to package reproducible projects, track results, and encapsulate models compatible with existing tools, thereby accelerating the ML lifecycle at any scale.

With the help of an example, we will show how MLflow eases the bookkeeping of experiment runs and results across frameworks, makes runs quickly reproducible on any platform (cloud or local), and productionizes models on diverse deployment tools.

By the end of this workshop, you will be familiar with the key concepts, abstractions, and components of open-source MLflow:
- How each component of MLflow addresses challenges of the ML lifecycle
- How to use MLflow Tracking during model training to record experimental runs
- How to use the MLflow Tracking user interface to visualize experimental runs with different tuning parameters and evaluation metrics
- How to use MLflow Projects to package reusable and reproducible models
- How to use the MLflow Models general format to serve models through the MLflow REST API

The purpose of the session is to introduce the audience to MLflow and give a taste of the ML development lifecycle. It is intended as a breadth-rather-than-depth survey of the MLflow platform, and we leave the audience to experiment with it further through takeaway exercises.
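As a toy illustration of the bookkeeping that experiment tracking automates, consider the sketch below. The `Tracker` class is a hypothetical stand-in written for this page, not MLflow's API; with MLflow itself you would call `mlflow.start_run`, `mlflow.log_param`, and `mlflow.log_metric` instead:

```python
# A toy run tracker showing the bookkeeping MLflow Tracking automates:
# record (params, metrics) per run, then query for the best run.
# `Tracker` is a hypothetical stand-in, not MLflow's API.

class Tracker:
    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        self.runs.append({"params": params, "metrics": metrics})

    def best_run(self, metric):
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = Tracker()
for lr in (0.01, 0.1, 1.0):
    accuracy = 0.9 - abs(lr - 0.1)          # fake training result
    tracker.log_run({"lr": lr}, {"accuracy": accuracy})

print(tracker.best_run("accuracy")["params"])  # {'lr': 0.1}
```

Doing this by hand across frameworks, machines, and teammates is exactly the pain point the workshop addresses.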



Workshop Session 5

Knowledge graph data modelling with TerminusDB


ABOUT CHEUK TING

Instructor Cheuk Ting Ho
Max Attendance 30
Level All
Requirements
1. A computer with a stable internet connection
2. The TerminusDB desktop app or Docker image (a.k.a. TerminusDB Bootstrap), which you can download free from https://terminusdb.com/hub/download
3. The Python client for TerminusDB (requires Python >= 3.7)
4. An open mind, ready to learn something new
Date November 19, 2021
Time 17:00 - 19:00 (ICT)
Duration 120 mins

Description

Storing data in a tabular format is not always ideal. Taking advantage of knowledge graphs can make handling complex data structures possible and data visualization easier. In this workshop, you will get all the basics to start modelling data in terms of triples and building schemas for a knowledge graph.

Who is this workshop for?

Data scientists, engineers, and researchers who have no prior experience in knowledge-graph data modelling. In this workshop, we will start from the fundamentals, learning how to think in terms of triples to describe relations between different data objects. If your work involves data analysis, data management, data collaboration, or anything data-related, this workshop will give you a brand-new insight into how data can be represented and stored.

Short format of the workshop:
- Overview - 5 mins
- Lecture - 30 mins
- Break - 10 mins
- Hands-on training - 40 mins
- Closing - 5 mins

Workshop Agenda

Overview - 5 mins
We will go through the workshop structure, introduce TerminusDB (the open-source tool we use), and run a pre-flight check to make sure everyone's setup is ready.

Lecture - 30 mins
Through slides and presentation, we will go through the fundamental constructs of a knowledge graph:
- What a triple is
- What objects, documents, and other elements of a knowledge graph are
- The different types of properties
Then we will show how data represented in a relational database (tables joined with keys) can be reconstructed as a knowledge graph, and the elegance of doing so.
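Thinking in triples can be sketched with nothing more than Python tuples. The toy store below is illustrative only; TerminusDB's typed schemas and the WOQL query language are far richer than this:

```python
# A minimal in-memory triple store: facts as (subject, predicate, object)
# tuples, with a query where None acts as a wildcard. Rows that a relational
# table would join with keys become explicit, queryable relations.

triples = [
    ("alice", "works_for", "acme"),
    ("bob", "works_for", "acme"),
    ("acme", "located_in", "bangkok"),
]

def query(s=None, p=None, o=None):
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="works_for", o="acme"))  # everyone who works for acme
```

Each row of a relational table becomes a handful of triples, and foreign-key joins become paths through the graph.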

Break - 10 mins
A short break, overrun buffer, and time for questions.

Hands-on training - 40 mins
The session starts with a short tour and demo of building a knowledge-graph schema with the schema builder in TerminusDB (10 mins). Then attendees will be given a dataset represented in tables and will apply what they learnt in the lecture to construct a schema that works best for it; they are encouraged to ask questions during this part (20 mins). Finally, we will build the same schema with the Python client (10 mins).

Closing - 5 mins
We will conclude what attendees have achieved and suggest ways to continue learning how to work with knowledge graphs and acquire related skills, for example using the Python client to manage data in TerminusDB programmatically.

What attendees will learn

By the end of the workshop, you will be able to think like a knowledge-graph expert and construct a proper schema to store your data in knowledge-graph format. You will acquire the skills you need to build knowledge graphs in TerminusDB, an open-source graph database that enables revision control and collaboration.

Course benefits

You will have learnt a new skill set that may assist your data science or research projects, and gained a new tool to model your data better and collaborate with others. You will also have all the prerequisites to use WOQL, a query language for knowledge graphs, and the TerminusDB Python client to manage, manipulate, and visualize data in your knowledge graph.



Workshop Session 6

50 Powerful Tasks to Learn PyCharm Pro (Conducted IN THAI LANGUAGE)

All participants receive a free 1-year PyCharm licence.


ABOUT WORAJEDT

Instructor Worajedt Sitthidumrong
Max Attendance 20
Level All
Requirements Browser and Internet Connection
Date November 19, 2021
Time 17:00 - 19:00 (ICT)
Duration 120 mins

Description

Details to be confirmed



Workshop Session 7

Exploring Python Fundamentals Hands On


ABOUT NAOMI

Instructor Naomi Ceder
Max Attendance 20
Level All
Requirements -
Date November 20, 2021
Time 07:00 - 09:00 (ICT)
Duration 120 mins

Description

Have you ever wondered how something really works in Python? For example, how do slices work? How are list comprehensions different from generator expressions? In this workshop we'll create experimental classes and use the Python disassembler and the timeit module to explore your questions about Python, getting you comfortable doing your own explorations in the future. No advanced knowledge of Python is required: if you can write Python functions and have a very basic knowledge of classes, you can take this workshop and learn how to run your own Python experiments.
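The kind of experiment this workshop teaches can be done entirely with the standard library. For instance, comparing a list comprehension with a generator expression using `dis` and `timeit` (the example functions below are illustrative, not the workshop's material):

```python
import dis
import timeit

# Two stdlib tools for exploring Python: dis shows the bytecode the
# interpreter runs, and timeit reliably measures small snippets.

def listcomp():
    return [x * x for x in range(10)]

def genexpr():
    return (x * x for x in range(10))

dis.dis(listcomp)   # bytecode that builds the whole list eagerly
dis.dis(genexpr)    # bytecode that just creates a generator object

# The generator expression returns immediately; the list comprehension
# does all its work up front. Compare the call costs:
print(timeit.timeit(listcomp, number=10_000))
print(timeit.timeit(genexpr, number=10_000))
```

Reading the two disassemblies side by side makes the difference concrete in a way that documentation alone rarely does.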


Contact