
AWS & Udacity Offer Scholarships for Premium Machine Learning Engineer Nanodegree

Once in a while, great companies partner with Udacity to offer scholarships and help students build highly in-demand skills in the field of Data Science.

This time, Amazon Web Services is sponsoring the AWS Machine Learning Scholarship Program, in which we’ll have the opportunity to learn both the foundations and more advanced skills needed to become professional Machine Learning Engineers, while getting hands-on with some of the most in-demand tools and technologies in the AWS ecosystem.

Requirements

  • The applicant must be 18 or older🔞

Who should apply

This program is aimed at developers of all skill levels, from beginners to intermediate machine learning practitioners.

How it works

The program takes place 100% online and has 2 phases:

  • Phase 1: Scholarship Foundations Course
  • Phase 2: Full Scholarship for a Udacity Nanodegree program

Phase 1: Scholarship Foundations Course

In the foundations course, students will learn how to write production-level code and practice object-oriented programming, as well as deep learning techniques to apply in real-world scenarios.

“The course will help students develop object-oriented programming skills including writing clean and modular code and also introduce key AWS machine learning technologies, namely Amazon AI Services and Amazon AI Devices and apply their skills in the AWS lab environment.”

From May 19, 2020 you’ll be able to enroll for free in the Foundations Course and you will have until July 31, 2020 to complete it.

This course should take you around 3-5 hours per week if you start in May, but you can follow the lessons at your own pace. Once you finish, you’ll receive a certificate of completion. Finally, you will receive instructions to take an online assessment quiz (within the aforementioned period) to be eligible for Phase 2.

Phase 2: Full Scholarship for a Udacity Nanodegree program

The top 325 scorers will receive the full scholarship for Udacity’s popular Nanodegree program: AWS Machine Learning Engineer.

These kinds of Nanodegree programs are usually priced at around $400/month, so it’s definitely an opportunity you don’t want to miss!

In this nanodegree you will get the chance to learn advanced machine learning techniques and algorithms.

“This program will offer world-class curriculum, a groundbreaking classroom experience, industry-leading instructors, thorough project reviews, and a full suite of career services.”

Students selected for Phase 2 who complete the full Nanodegree program will be awarded a Nanodegree certificate.

Nanodegree students should expect to invest about 10 hours per week during the program, which runs for about 2 months.

How to enroll

Enrollment opens on May 19, 2020. However, you can sign up right now to be notified when the course opens its (virtual) doors.

Some personal thoughts

Even if you don’t make it to Phase 2, I think it’s worth completing the Foundations Course, as it’s a great opportunity to sharpen the basics of ML and learn about these highly in-demand technologies in the AWS ecosystem.

You’ll be able to interact with other students in the same community and help each other. Additionally, if you were looking for some motivation to start learning ML, this is a great challenge to get in the habit of studying a bit every week while having a sense of community that will encourage you to continue making progress.

I will definitely enroll in this course, so let me know if you’re joining me!✨


Thank you for your time and I really hope this post was informative 😊

See you in the next one! 🚀


How to Deploy Models at Scale with AI Platform

Usually, when we all start learning Machine Learning, we find a ton of information about how to build models, which of course is the core of the topic. But there’s an equally important aspect of ML that is rarely taught in the academic world of Data Science, and that is how to deploy these models. How can I share this useful thing that I’ve built with the rest of the world? Because, at the end of the day, that’s the purpose of our job, right? Making people’s lives easier 😊.


In this post, we’ll learn how to deploy a machine learning model to the cloud and make it available to the rest of the world as an API.

The Workflow

We’re going to first store the model in Firebase Storage to deploy it to AI Platform where we can version it and analyse it in production. Finally, we’re going to make our model available through an API with Firebase Cloud Functions.


What’s the AI Platform? 🧠

AI Platform is a Google Cloud Platform (GCP) service that makes it easy to manage the whole production and deployment process: you don’t have to worry about maintaining your own infrastructure, and you only pay for what you use. This makes it possible to scale massively for fast-growing projects.

You can try this and many more experiments on your own FOR FREE by making use of GCP’s 12-month, $300 free trial to get you started.

What are Firebase Cloud Functions?🔥

Essentially, for the purpose of this post, the cloud function will work as an API. We will make the predictions of our model available through a link that any person can make requests to, and receive the response of our model in real time.

What you’ll need

  • A model ready to share ✔
  • A Google account ✔

yep, that’s all

Getting started

Just for the sake of simplicity, I’m going to assume the model was developed in Python and lives in a Jupyter Notebook. But of course, these steps can be adapted to any other environment.
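Just so the later snippets have something concrete to point at, here’s a minimal sketch of what such a notebook might contain. It only assumes a scikit-learn classifier stored in a variable named clf (the name used by the upload code below); the dataset and model choice are placeholders, not part of the actual workflow.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data with 4 numeric features, standing in for your real dataset
X, y = make_classification(n_samples=1000, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train a simple classifier; `clf` is the object we'll export later
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print('Test accuracy:', clf.score(X_test, y_test))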

1. Sign in to Firebase

First, sign in to the Firebase Console with your Google account and create a new project. Once you’re inside the Firebase dashboard, go to Project settings > Service accounts > Firebase Admin SDK, select the Python option (in this case) and click on Generate new private key. This will download a JSON file with your service account credentials, which you can save in your notebook’s directory.

Then, install the Firebase Admin SDK package: pip install firebase-admin

2. Store model in Firebase Storage

Once you’ve trained and tested your model it’s ready to upload to AI Platform. But before that, we need to first export and store the model in Firebase Storage, so it can be accessed by AI Platform.

If you’re using a notebook, create a new cell at the end and add the following script, which initializes the Firebase Admin SDK with your service account:

import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore

# Use a service account
if not firebase_admin._apps:
    cred = credentials.Certificate(r'service_account.json')
    firebase_admin.initialize_app(cred)

db = firestore.client()

Now, to run the following code you’ll need to get the Project ID, which you can find again in your Firebase project settings.

Once we have our Project ID, we upload the model by running the following code (replace the placeholder with your own Project ID).

from sklearn.externals import joblib
from firebase_admin import storage

joblib.dump(clf, 'model.joblib')

bucket = storage.bucket(name='[YOUR PROJECT ID HERE].appspot.com')
b = bucket.blob('model-v1/model.joblib')
b.upload_from_filename('model.joblib')
print('model uploaded!')

Now we can verify that the model has been correctly uploaded by checking in Firebase Storage inside the specified directory (which in our case is model-v1/).
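If you’d rather verify it from the notebook itself, a quick sketch like the following lists everything stored under the model-v1/ prefix (it reuses the bucket object from the previous cell; the output simply depends on what you uploaded):

# List all files stored under the model-v1/ prefix
for blob in bucket.list_blobs(prefix='model-v1/'):
    print(blob.name, blob.size, 'bytes')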

3. Deploy model in AI Platform

Now that the model has been stored, it can be connected to AI Platform.

We need to enable a couple of APIs in Google Cloud Platform. On the left panel, inside the Library section, we look for the APIs “AI Platform Training & Prediction API” and “Cloud Build API” and enable them.

Now, on the left panel we click on AI Platform > Models, click on Create new model and input the corresponding information.

Once we’ve created the model it’s time to create a version of it, which will point to the .joblib file that we previously stored. We click on the model > New version and fill in the information. It’s important to choose the same Python version that we used for training the model, and to choose scikit-learn as the framework. When specifying the framework version, we can get it by running the following code in our notebook:

import sklearn
print('The scikit-learn version is {}.'.format(sklearn.__version__))

When choosing the ML Runtime version, you should select the recommended one. The machine type can be left by default for now.

Finally, we specify the folder in which our .joblib file is located. It’s important to select the folder, not the file! The rest of the fields can be left at their defaults; then save. At that moment, an instance of our model will be deployed in AI Platform.

Now, we’ll be able to make predictions from the command line or from other Google APIs, such as Cloud Functions, as we’ll see next. Additionally, we’ll be able to get some performance metrics on our model.
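For instance, a quick sanity check from the command line could look something like this (assuming the gcloud SDK is installed; the model and version names are placeholders, and the input file must contain one JSON instance per line):

$ echo "[32, 162, 152, 45]" > instance.json
$ gcloud ai-platform predict --model [YOUR MODEL NAME] --version [YOUR VERSION NAME] --json-instances instance.json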

4. Create the Cloud Function

Let’s see how to implement the function!

We’re going to run some commands in the terminal, but for that, you’ll need to make sure you have Node.js installed on your computer. The following commands are written for Windows, but you should be able to use them on Unix and macOS machines by adding sudo at the beginning of each command.

Let’s start by installing the Firebase client: $ npm install -g firebase-tools

We access the Google account: $ firebase login

Initialize a new project directory (make sure you’re in the directory you want to initialize it in): $ firebase init

When running this last command you’ll be asked several questions. When asked about the Firebase project you want to use in the directory, choose the one that contains the ML model we previously exported. Select JavaScript as the programming language. We won’t use ESLint, so answer no. And finally, answer yes to installing dependencies with npm.

Once the project has been created, Firebase will have generated the project’s directory structure, including a functions/ folder.

Inside this directory, we’ll only modify the index.js and the package.json files.

We install the packages of the Google API: $ npm i googleapis

Now we check the packages have been installed correctly by opening the package.json file. In case you want to use any other external package in your code you should also add it in this file with its corresponding version.

For now, it should have a structure similar to this:

"dependencies"​: {
​"firebase-admin"​: ​"~7.0.0"​,​
"firebase-functions"​: ​"^2.3.0"​,​
"googleapis"​: ​"^39.2.0"​
}

I’ll briefly explain what they do:

  • firebase-admin: the Admin SDK, which allows you to interact with Firebase from privileged environments.
  • firebase-functions: an SDK for defining Cloud Functions in Firebase.
  • googleapis: the Node.js client library for using Google APIs.

Now let’s see the implementation of the function (we are editing the index.js file), which you can also find in this GitHub repository. As an example, I’ll be using the code to access a simple fake-account detection model.

We start by loading the firebase-functions and firebase-admin modules.

const functions = require('firebase-functions');
const admin = require('firebase-admin');

We initialize the admin app, load the googleapis module and add a reference to version 1 of the ML API.

admin.initializeApp(functions.config().firebase);
const googleapis_1 = require("googleapis");
const ml = googleapis_1.google.ml('v1');

The requests are going to be handled by an HTTP function.

exports.predictSPAM = functions.https.onRequest(async (request, response) => {

We specify the input values of the function. In this example, I’m getting some data about a social media account that my model will use to classify it as fake or not. You should specify the fields that you plan to pass to your model.

const account_days_old = request.body.account_days_old;
const followers_count = request.body.followers_count;
const following_count = request.body.following_count;
const publications_count = request.body.publications_count;

After that, we build the input of the model, that is, the input parameters that we’ll send to the model to get the prediction. Note that these inputs should follow the same structure (order of features) with which the model was trained.

const instance =
  [[account_days_old, followers_count, following_count, publications_count]];

Now, let’s make the request to the Google API. This request needs authentication, which will connect our Firebase credentials with the Google API.

const model = "[HERE THE NAME OF YOUR MODEL]";
const { credential } = await googleapis_1.google.auth.getApplicationDefault();

After storing the name of our model in a variable (the name should be the same you gave it in the AI Platform console), we make a prediction call to AI Platform by sending our credentials, the name of the model and the instance that we want the prediction for.

const modelName = `projects/[YOUR PROJECT ID HERE]/models/${model}`;
const preds = await ml.projects.predict({
  auth: credential,
  name: modelName,
  requestBody: {
    // the prediction API expects the input under the "instances" key
    instances: instance
  }
});

response.send(preds.data['predictions'][0]);
});

5. Deploy the Cloud Function as an API

Once we’ve created the cloud function that accesses the model, we just need to upload it to Firebase to deploy it as an API.

To upload the Firebase function we run the following command in the terminal: $ firebase deploy --only functions

Once the deploy has finished, you’ll get a URL through which the function will be accessible. You can also find it by logging into the Firebase console, in the Functions section, shown in smaller print under Request.

And that’s all, now your model is up and running, ready to share! 🎉🎉🎉

You can make requests to this API from a mobile app, a website…it could be integrated anywhere!
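As a sketch of what that could look like from a Python script (the URL below is a placeholder; use the trigger URL of your own function, and note that the exact response format depends on your model):

import requests

# Placeholder URL: replace with your own Cloud Function trigger URL
url = 'https://us-central1-[YOUR PROJECT ID].cloudfunctions.net/predictSPAM'

payload = {
    'account_days_old': 32,
    'followers_count': 162,
    'following_count': 152,
    'publications_count': 45
}

response = requests.post(url, json=payload)
print(response.status_code)   # 200 if everything went well
print(response.json())        # the prediction returned by the model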

6. Test your API with Insomnia

This is, of course, an optional step; if you followed the previous guidelines, your model should already be ready to receive requests. However, as a programmer, I like to test things to check that everything works fine.

My favourite way to test APIs is with Insomnia. Insomnia is a REST API client that lets you test your APIs easily. This free desktop app is available for Windows, macOS and Ubuntu. Let’s check if our newly made API works properly!

Once we’ve installed the desktop app we can create a new request.

We’ll write the request name, choose POST as the method and JSON as the body format.

Once we’ve created the request, we copy the URL of the cloud function and we paste it in the top bar.

We will now write the request body following the format that we specified in the function; in my case, it looks like this:

{
  "account_days_old": 32,
  "followers_count": 162,
  "following_count": 152,
  "publications_count": 45
}

We now hit SEND and we’ll get the response, as well as the response time and its size. If there’s an error, you’ll receive the corresponding error code instead of the 200 OK message.

The response that you get will, of course, vary depending on your model. But if everything works fine, then congrats! You’re ready to share your model with the rest of the world! 🌍


If you made it this far, thank you for your time and I hope you got some value from this post😊

See you in the next one! 🚀


The Git Cheat Sheet

Git is one of the most popular version control systems out there. You can think of it as a way to take snapshots (commits, in Git nomenclature) of your code in a specific state and time, just in case you mess things up later and want to go back to a stable version of your code. It’s also a great way to collaborate if you combine it with GitHub.

Git is free and open source. You can download it from the official website. Once it’s installed you should be able to run Git commands on your terminal.

A couple of things you should know before starting to use Git:

  • When you want to keep track of the files in a project, you put them inside a repository. A repository is basically a directory in which version control is enabled: Git is always vigilant of what you put in there, knows whenever any changes are made to those files, and helps you keep track of them.
  • Each commit has an identifier in case you need to reference it later. This ID is called a SHA and it’s a string of characters.
  • A working directory contains all the files you see on your project directory.
  • The staging index is a file in the Git directory that stores the information about what is going to be included in your next commit.

Create or Clone a repository

Create or clone a repository on the current directory.

  • Create repository from scratch: git init
  • Clone an existing repository: git clone https://github.com/...
  • Clone repository and use different name: git clone https://github.com/... new_name

Display information

  • Determine a Repo’s status: git status
  • Display a Repo’s commits: git log
  • Display a Repo’s commits in a compact way: git log --oneline
  • Viewing Modified Files: git log --stat
  • Viewing file changes: git log -p
  • Viewing file changes ignoring whitespace changes: git log -p -w
  • Viewing most recent Commit: git show
  • Viewing A Specific Commit: git show <SHA of commit>

Add

“Staging” means moving a file from the Working Directory to the Staging Index.

  • Staging Files: git add <file1> <file2> … <fileN>
  • Unstaging files: git rm --cached <file>...
  • Stage all the files: git add .

Commit

Take files from the Staging Index and save them in the repository.

  • Commit staged files: git commit. This command will open the code editor; inside it you must supply a commit message, save the file and close the editor.
  • Commit files without opening the code editor: git commit -m "Commit message"

Git Diff

  • See changes that have been made but haven’t been committed yet: git diff

Tagging

Tags are used as markers on specific commits. These are really useful to assign a version to the code.

  • Add tag to the most recent commit: git tag -a <tag-name> (e.g. v1.0)
  • Add tag to a specific commit: git tag -a <tag-name> <SHA of commit>. The -a flag creates an annotated tag, which includes extra information such as the date of creation and the person who made it. It is usually considered good practice to add this flag.
  • Display all tags in the repository: git tag
  • Deleting A Tag: git tag -d <tag-name>

Branching

When a commit is made in a repository, it’s added to the branch you’re currently on. By default, a repository has a branch called master. Especially when experimenting with new features in your code, it is often useful to create a separate branch that acts as a safe, isolated environment starting from your last commit. You can switch between branches, and commits will only be added to the one you’re currently on.

  • List all branches: git branch
  • Create new branch: git branch <branch-name>
  • Delete branch: git branch -d <branch-name>. You cannot delete a branch you’re currently on, and you cannot delete a branch if it contains commits that aren’t on any other branch.
    • To force deletion: git branch -D <branch-name>
  • Switch between branches: git checkout <branch-name>
  • Add branch on a specific commit: git branch <branch-name> <SHA of commit>
  • Start branch at the same location as the master branch: git branch <branch-name> master
  • See all branches at once: git log --oneline --decorate --graph --all

Merging

When a merge is performed, the other branch’s changes are brought into the branch that’s currently checked out.

  • Perform merge: git merge <branch to merge in>
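For example, a typical experiment-and-merge flow might look like this (the branch name is just an example; git checkout -b is a shortcut that creates the branch and switches to it in one step):

git checkout -b new-feature        # create the branch and switch to it
git add .
git commit -m "Experiment with the new feature"
git checkout master                # go back to the branch you want to merge into
git merge new-feature              # bring new-feature's commits into master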

Correcting stuff

  • Modify last commit message: git commit --amend

Revert a commit

This will create a new commit that reverts (undoes) a previous commit.

  • Undo the changes made in a commit: git revert <SHA of commit>

Reset a commit

This will erase commits.

  • Reset commit: git reset <reference to commit>

Depending on the flag you add you’ll obtain a different result:

  • --hard flag to erase commits
  • --soft flag to move the committed changes to the staging index
  • --mixed flag to unstage committed changes
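As an illustration, here’s how the three flags behave when resetting the most recent commit (HEAD~1 is a reference to the commit right before the current one):

git reset --soft HEAD~1    # removes the commit, its changes stay in the staging index
git reset --mixed HEAD~1   # removes the commit, its changes stay unstaged in the working directory
git reset --hard HEAD~1    # removes the commit and erases its changes entirely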

If you want to learn more about Git, Udacity offers this great course that covers all the basics and many more concepts in depth (for free!). I did this course myself and it served as an inspiration to make this summary about some of the main concepts I learned and now use in my projects.

Hope you got some value from this post 😊 see you in the next one!


Here’s Why You Should Buy a Kindle

Looking back at all the presents I’ve ever been given over the course of my life, the Kindle I received for my graduation has been one of the most game-changing. This little gadget has transformed the way I consume books. By reducing the friction of reading it has helped me read more frequently, and most importantly it has made me a more productive reader.

In this post I’ll be discussing the main features that make this device a must have for readers. Note: This list is not ordered by relevance.

  1. Instant word translation: When reading on a Kindle you get the meaning of any word directly from the dictionary just by tapping it. You’ll also get results from Wikipedia and Google Translate when connected to Wi-Fi. This is a key feature for me, as I usually read in English (which is not my native language), so I don’t have to reach for my phone whenever I don’t know the meaning of a word.
  2. Read in any light condition: Before Kindle when I tried to read on the bus on the way back home it was already too dark and I couldn’t see a thing. Now I use the adjustable light of Kindle which makes it easy to read in low-light conditions without damaging my eyes.
  3. Easy one-handed reading: I read whenever I have the chance, and that sometimes means reading while I’m having breakfast. Before Kindle, I ended up grabbing the book one-handed in a weird way so I could eat breakfast with the other hand. Fortunately, those days are long gone😂. Now I can hold it anywhere as if it were a phone.
  4. Buy instantaneously: Even though I love going to bookshops in search of my next book, I must say that being able to buy a book anytime is a big plus, especially when a book is hard to find or isn’t available in a certain language.
  5. Save paper: If you care about the environment, this one’s for you🌱. You can reduce your paper usage by reading on Kindle. Let’s try not to contribute to deforestation as much as we can 💚.
  6. A portable library: I guess this is the obvious one, but also the most impactful for those who travel. With a Kindle you can always carry a ton of books with you without worrying about how you’re gonna fit them in the suitcase.
  7. Water resistant: It’s not like you want to read in the shower, but it feels safer to know that even if you’re outside while it’s pouring the Kindle is going to be fine.
  8. Notes and highlights: Being able to highlight and add side notes to the books I’m reading has been a huge game changer. I usually read non-fiction and enjoy highlighting the main ideas, then exporting and reviewing them later.
  9. Progress info: Kindle tells you how much time or how many pages are left until you finish a chapter or the book.
  10. Cheaper books: Kindle books are usually cheaper than the hardback ones.
  11. Battery life: Amazon claims that “A single battery charge lasts weeks, not hours,” and this is entirely true; I use it every day and it lasts a surprisingly long time.
  12. Snippets and recommended highlights: As Amazon says, “With Word Wise, you can see simple definitions and synonyms displayed inline above more difficult words while you read.” You can actually choose the level of difficulty for which you want to receive these instant definitions.
Word Wise definitions and synonyms on Kindle.

Additionally, there’s another feature that shows the most highlighted quotes in a book. Note: you’ll need a Wi-Fi connection to use these features.

  13. Goodreads sync: If you’re a book nerd you might have heard about this social network. With Kindle it’s easy to synchronize the book progress with your Goodreads account and write reviews directly from it.


These were the main points I could come up with, but I’m sure there are more.

Of course, you don’t get that “feeling of reading a hardback book”, but for me the aforementioned benefits far outweigh the lack of that sensation. The Kindle has greatly improved my reading experience and, overall, it has made me a better reader.

Hope this post was helpful in some way. See you in the next post 😊