Machine Learning App in Thunkable.

Sofía Galán
5 min read · Apr 4, 2021

--

Part 3 of Machine Learning + Cloud Run + Thunkable

Overview of this tutorial.

PART 1 — Design and train your Machine Learning Model with Teachable Machine

PART 2 — Custom API for Keras Models using Cloud Run

PART 3 — Machine Learning App in Thunkable

PART 3 — Machine Learning App in Thunkable.

So you made it this far. Congratulations! Here comes the easy part: integrating it into Thunkable. We will be using Thunkable's Web API component.

The documentation explains it thoroughly, so if you haven't used it before, please give it a read so you'll get a better grasp of the rest of this tutorial.

So for my design, I’m going to add the following components:

  • Camera or your photo library
  • Cloudinary
  • Web API

Create a new project and add all the design elements:

I'm going to create a new project using the brand new and shiny UI from Thunkable. You can use the old UI as well.

I'm going to add a way to view my image, a button to take a photo, and labels to display the result. I'm also adding a loading icon, because it takes a while to get the result. It's really simple.

To add Cloudinary to your project, go to the Settings tab and scroll all the way down. You'll see a section called Cloudinary Settings. Fill in all the information and we're good to start building our blocks.

For my blocks, I'm going to add a Web API component and configure it with the URL we got from our Cloud Run deployment:

Blocks in Thunkable

Let's go through all the code. First, I want to declare some variables to store information. I need a place to store the URL I get from Cloudinary, so I can use it both to display the picture and as a parameter for my API.

After declaring my variables, I want to set up the basics when my application starts. I set the loading icon's visibility to false, because nothing is loading yet, and hide my image as well.

OK, so here is the big part of the code. Let's break it down into pieces.

First, I set my loading icon to visible and set my answer to a blank string. Afterwards, I get the URL using the Camera component. After some testing I found some lag in retrieving the URL. I really don't know why, but to work around it I put a Timer between my image block and the Web API block.

Then I set the parameters for my API using the Web API component, building them with an object (the pink block). After setting the parameters, I called the API with a POST request carrying the parameters my Cloud Run API needs to work. When it finished, if the status was 200 I called a function with my response. If not, I wrote the error to a label for further analysis and debugging.
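For anyone who wants to see the equivalent in plain code, here is a rough Python sketch of what those blocks do. The Cloud Run URL and the `url` parameter name are assumptions for illustration; use whatever your own Part 2 deployment actually expects.

```python
import json
import urllib.error
import urllib.request

# Hypothetical endpoint -- substitute the URL from your own
# Cloud Run deployment in Part 2.
API_URL = "https://my-keras-api-abc123-uc.a.run.app/predict"

def build_body(image_url):
    """Build the JSON body the POST blocks send: the Cloudinary
    image URL is the only parameter (assumed name: "url")."""
    return json.dumps({"url": image_url}).encode("utf-8")

def call_ml_api(image_url):
    """POST the image URL to the API. On HTTP 200, return the parsed
    response; otherwise return the error code, mirroring the
    'if status = 200' branch in the blocks."""
    req = urllib.request.Request(
        API_URL,
        data=build_body(image_url),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
    except urllib.error.HTTPError as err:
        # shown in a label in the app for debugging
        return {"error": err.code}
```

The Timer workaround from the blocks has no equivalent here; in code you would simply call the API once the Cloudinary URL is available.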

For my function, called Check ML Variables, I first split the response into two parameters. (You could actually clean this up more on the Python side; keep that in mind for future APIs.) I then compared the two values: if one was bigger than the other, the answer label's text changed to the value captured. At the end, regardless of the answer, I set the loading icon's visibility to false.
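In plain code, the logic of that function looks something like this sketch. It assumes the API returns the two class scores as a comma-separated string, and the class names are placeholders; adapt both to whatever your Cloud Run API and Teachable Machine model actually produce.

```python
def check_ml_variables(response_text):
    """Mirror of the Check ML Variables blocks: split the response
    into its two scores and return the label of the larger one.
    Assumes a "score1,score2" string -- adjust the split to match
    your API's real response format."""
    first, second = response_text.split(",")
    if float(first) > float(second):
        return "Class 1"  # placeholder -- your first Teachable Machine class
    return "Class 2"      # placeholder -- your second class
```

In the app, whatever this returns is written to the answer label, and the loading icon is hidden afterwards either way.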

If you want to check it out for yourself, you can make a copy of the project by clicking here. Below is a picture of the app working; I tried it out with a photo of Frida.

We are done!

To be honest, it's not the best solution, but it is one. It's a bit laggy and not as fast as I would like, but it works. Even if you're not doing anything related to machine learning, you can use this method to implement other things. For example, I use this exact method in my VR + Thunkable project.

Anyway, I look forward to reading all the feedback and seeing if anyone builds something with this huge tutorial. It's a bit daunting, especially if you're not used to coding, but it's worth the try! Happy Thunkin', and especially good luck to any girl out there learning how to code :)

Any examples and links will not work after the release of this project. This project is a volunteer effort from Mexico City’s Technovation Chapter. I don’t make anything from this and everything is open source.


Sofía Galán

SWE @ Axiom Cloud / How-To Guides & Tutorials for Afterwork DIYs