How to cultivate best-in-class Machine Learning models.

Here’s the problem I want to address:

It’s not trivial to compare a very diverse set of Machine Learning models and identify where each model stands out and/or where it can be improved.

Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) are the focal point of a vast number of articles and books written by researchers and practitioners.  In many instances, a common denominator is the claim that great AI algorithms are expected to be fast, accurate, and deliver novel insights.  And as if working with those algorithms weren’t hard enough already, a more recent trend expects those well-tuned models to also be ethical and transparent.

Data science teams certainly face a lot of pressure these days.  Can they succeed?

Sure they can!  It’s already common practice to use a range of methods to evaluate a model’s performance. Some approaches include working with a confusion matrix and its many derived rates (e.g. accuracy, precision, recall/sensitivity, specificity, F1-score), or using feature engineering, feature selection, and cross-validation to tune classification/prediction models.
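As a quick illustration of the confusion-matrix rates mentioned above, here they are computed by hand from the four cell counts of a binary classifier (the counts are made up, not from any real model):

```python
# Toy confusion-matrix cell counts (made up for illustration).
tp, fp, fn, tn = 40, 10, 5, 45

total = tp + fp + fn + tn
accuracy = (tp + tn) / total
precision = tp / (tp + fp)
recall = tp / (tp + fn)            # a.k.a. sensitivity
specificity = tn / (tn + fp)
f1_score = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, specificity, f1_score)
```

Benchmarking a diverse set of models then largely becomes a matter of computing these same rates for each candidate and comparing them side by side.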

But to everyone’s despair, the number of variables and scenarios to analyze can very quickly escalate and spiral out of control… so what can help data scientists evaluate the best mix of input features, training processes, and model hyperparameters to deliver best-in-class model outputs?

I want to propose the following mix: Automation + Artificial Intelligence + Benchmarking

In this article I’m proposing a new strategy for teams that manage a complex suite of ML models as part of their data science initiatives. I hope to shed some light on model selection and optimization through insights discovered via comparative performance analysis, sometimes referred to simply as ‘benchmarking’.


Creating a real-time cross-language communicator

Here’s my latest fun project: an iPhone app that performs real-time translation, leveraging Speech-to-Text and Text-to-Speech technologies.  For example, the app will listen to someone speaking Portuguese, translate it to English, and speak the translation in English.  The app also works the other way around, listening to English and speaking the translation in Portuguese.

The app is currently configured to help Portuguese and English speakers communicate with each other.  The idea is to have the iPhone sit between them… listening to them, translating all spoken/typed words in real time, and speaking the translated words in the target language.

Currently, this is not a commercial/published app.   And although there might be similar apps already out there – I didn’t find anything exactly like this… I primarily created it so my son (who barely speaks Portuguese) can have a custom tool to help him communicate with many of his relatives (who barely speak English).  Let’s see how it plays out… 🙂

My inspiration for this project dates back to the early 80s, when my second-grade teacher assigned a book for all students to read: “As sete cidades do arco-iris” (translation: “The Seven Cities of the Rainbow”), by Brazilian author Teresa Noronha.  The storyline was about a kid taken to a different planet where the people of different cities spoke different languages. On his way to the cities, the main character was given a device to carry around his neck that would translate everyone’s words to him in real time, and vice versa, also translating his words to everyone.  Sadly, I no longer have that book.  But the concept of that real-time translation/communication device stuck with me, and I finally have the tools to create something like it.

My app is written in Swift, Apple’s programming language.  For the Translation and Speech-to-Text components I’m using Google’s APIs.  For the Text-to-Speech, I’m using Swift/iOS’s own speech framework, which is available for free, unlike the Google API, which has a tiny, tiny cost.  But I might switch to Google’s Text-to-Speech API to try to implement a feature I describe further below.

It amazes me how accurate speech recognition has become.  Not long ago, around 2010, while working for another company, I did a very comprehensive research project to identify a good speech recognition framework.  After evaluating the top free and commercial options available at the time, results were only correct about 60% of the time on average, at best.  Back then, the only way to ensure good recognition rates was to define a controlled dictionary ahead of time to limit the search space.  Today, there’s no need for controlled dictionaries.  It’s amazing how much the quality has improved now that companies like Apple, Google, and Amazon use (deep) neural nets and complex models to train their services.

As for my prototype app, my next step is to come up with a way to detect the spoken language automatically.  Google used to have an API for language detection, but it doesn’t seem to be available anymore (?!)  Getting language detection in place will allow the device (i.e. the phone) to simply sit in front of the people having a conversation, without a need to press a button telling the app what language to hear.

The other feature I want to implement is allowing the Portuguese voice to come out of one audio channel (e.g. left) while the English voice comes out of the other channel (e.g. right).  That way, both people could share the same pair of earbuds to listen to each other, without the iPhone’s speaker repeating everything for everyone around to hear.  But the Apple framework I’m currently using for the Text-to-Speech doesn’t seem to support that channel toggling, so that feature will have to wait for some further research.

In any case, the current prototype seems to be working pretty well, and I’m looking forward to seeing people test it out!   🙂


Posted by André Lessa

Working with your Twitter Followers/Following lists

This post is about how to get a simple list of Twitter Followers/Following that you can work with, using the data that’s already available for you – on your own browser.  It’s a concept similar to how bookmarklets work, but this approach doesn’t involve installing anything on your machine.

This quick solution runs on your own Chrome browser,  requires no API dev work, requires nothing to be installed, requires no additional page requests, and requires no 3rd-party services. It’s very simple, you control it, and it uses your browser’s own inspection tools. It’s just Javascript using XPath to navigate the page’s DOM.

  1. Use Chrome (it might work with other browsers, but I haven’t checked);
  2. Go to the followers/following section and scroll all the way to the bottom so they’re all visible on the page;
  3. Right click the web page and choose “Inspect”;
  4. You’ll see the developer tools panel open up. Click on Console;
  5. Paste the following Javascript code at the console prompt and press Return/Enter. It’s formatted here across multiple lines for readability; the console accepts a multi-line paste just fine.

a = $x("//div[@class='ProfileCard-userFields']");
b = [];
for (var i = 0; i < a.length; i++) {
  var handle = $x(".//span[@class='u-linkComplex-target']/text()", a[i])[0].textContent;
  var name = $x(".//a[contains(@class,'ProfileNameTruncated-link')]/text()", a[i])[0].textContent.replace(/(^\s+|\s+$)/g, '');
  b.push("@" + handle + "\t" + name);
}
b.join("\n");

As a result, you get a newline-separated list of handles and names that can be easily copied, and then pasted into a text editor or spreadsheet. Handles and names are separated with tabs so you can easily separate the data into columns for filtering and/or sorting.
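If you’d rather process the copied text with a script than a spreadsheet, splitting on newlines and tabs is all it takes (a generic sketch; the handles and names below are made up):

```python
# Sample of the tab-separated text the snippet produces (values are made up).
raw = "@alice\tAlice Example\n@bob\tBob Example"

# One [handle, name] pair per line.
rows = [line.split("\t") for line in raw.split("\n")]
for handle, name in rows:
    print(handle, name)
```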

This seems to work fine today… but if the design of the page loaded by your browser changes, this will likely stop working.

 

Open Sourcing a sleek intelligence API

Back in 2011-2012 I put a lot of time and energy into creating a simple and sleek JSON API framework for quick intelligence prototyping; an API capable of managing JSON objects, and performing a lot of smart computing tasks. Fast forward to 2016, I decided to open source the codebase, sharing it with the world because I believe this framework, although a bit outdated by now, still has the potential to help others.


SQLpie™ is an open source API framework that uses all sorts of SQL statements to creatively perform all kinds of computing tasks (thus, SQLpie). With SQLpie, developers can store JSON objects in a SQL database and run a lot of information retrieval and machine learning tasks on the data, covering areas such as: Text Classification, Text Summarization, Collaborative Filtering (item recommendation and similarity), Boolean/Vector Search, Document Matching, TagClouds, etc… The project is 100% written in Python and runs on top of a MySQL database.
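To make the core idea concrete (JSON objects living in a plain SQL table, queried with ordinary SQL), here’s a generic sketch using Python’s built-in sqlite3. This is only an illustration of the concept, not SQLpie’s actual API or schema:

```python
import json
import sqlite3

# Store a JSON object in a plain SQL table, then query it back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, body TEXT)")

doc = {"title": "Moby Dick", "author": "Herman Melville"}
conn.execute("INSERT INTO documents (body) VALUES (?)", (json.dumps(doc),))

# Retrieve and deserialize the stored object.
row = conn.execute("SELECT body FROM documents WHERE id = 1").fetchone()
print(json.loads(row[0])["title"])  # Moby Dick
```

Everything else (search, matching, classification, recommendation) builds on top of documents stored this way.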

The SQLpie project went after a lot of big challenges, and although I do not advocate that it includes the best implementations to handle all of those tasks, I believe the combined effort can help people quickly prototype new ideas, and hopefully, create new and awesome products.

Its API services can help developers with the following type of questions:

How can one store JSON documents? (answer: documents services)
How can one keep track of document relationships? (answer: observations services)
What documents exist for query Q? (answer: indexing and search services)
What documents are located near location L? (answer: geosearch service)
What top keyphrases and keywords relate to query Q? (answer: tagcloud search service)
What are the key sentences, entities, and terms associated with document D? (answer: summarization service)
What documents are similar (or relate) to document D? (answer: document matching service)
Will user U like document D? (answer: classification service)
How likely is user U to like document D? (answer: classification service)
What documents is user U likely to love based on user data? (answer: recommendation service)
What other users have a document taste similar to user U? (answer: similarity service)

If you’re a developer, learn more at SQLpie.com. The project is hosted on Github.

Cheers,
~ Andre Lessa

Benchmarking Engine: A new revenue stream opportunity with business data you already have.

Let’s start with the kind of question you are likely to ask yourself the first time you come across something new.

“What do I need a Benchmarking Engine for?”

A possible short answer is this: To efficiently and automatically identify opportunities for business performance improvement, customer/vendor satisfaction, and revenue generation.

Now for a more comprehensive answer…

Turning structured data into valuable well-written insights

 

Today is a great day.

OnlyBoth.com, my new company, just launched. We’re really excited about what we created and all the press we’re getting.


Our technology is really cool. It takes structured data (think big spreadsheets!) and finds insights that are hidden in plain sight. But not just that: it ALSO writes them up in perfect English, just as if a real person had analyzed the data and written a report about it.

To get the word out about the technology we created an application that leverages US College data. All the insights were created using an automated process. How much insight data has the software generated for this first application? Well, think something equivalent to 30 “Moby Dick”, or 65 “The Hobbit”, or 80 “Philosopher’s Stone” books.

Check it all out at OnlyBoth.com

Cheers,
~ Andre Lessa (@lessaworld)

 

4 reasons why building the Furious Monkeys game was an awesome experience!

A few years ago JP and I were browsing video games at a toy store, and at one point he said something like “Dad, let’s make a game. I got an idea.” He started writing ideas down, and just like any great Product Manager, he got all the core requirements for the game down in a flash. After a little brainstorming, the name Furious Monkeys came up. JP nicknamed the game “F.M.”, and although the game ideas kept changing until the very last minute before we pushed the first version to the iTunes App Store, the overall arc stayed the same.

The first reason F.M. is awesome is that I got to see JP go crazy about everything he wanted for the game, and I had to negotiate features with him. He’s a really demanding Product Manager for a teenager.


I really enjoyed writing the game, apart from having an excuse to learn a bit more about iOS and the process of getting apps submitted to Apple. I also got to spend time crafting custom audio effects and drawing a lot of original artwork for the game. For example, I couldn’t find a nice royalty-free whooshing sound for throwing a banana, and real bananas don’t make any sound when you throw them… so I had to invent the sound myself, just like those cool folks did when filming the original Star Wars movies.

And finally, how many indie games get to have their own celebration cake 😉


If you want to give Furious Monkeys a try, you can download the free version, which comes with the first 5 levels. If you master the speed of those levels and like the game, you can get the full version, which allows you to throw as many bananas at the birds as your skills allow.

Cheers,
~ Andre Lessa (@lessaworld)

Running HANA Client and HANA Studio on a Macbook

Although in SAP HANA 1.0, Rev 70, the most complete developer tooling is only available for Windows and Linux, it is possible to run the HANA Client and HANA Studio on a Mac by installing them in a virtual machine.

In my case, I decided to go with a Linux virtual machine running Ubuntu. I first tried VirtualBox, but I had issues getting the virtual machine to support a decent screen resolution, and I wasn’t making much progress getting my folders shared between the Mac and the Ubuntu virtual machine. I then decided to give Parallels a try. Unlike VirtualBox, Parallels is a paid product, but it came with a 14-day free trial, so I quickly decided to try it. The installation was a breeze and the integration with my Mac, amazing. I’m definitely keeping it.

With an Ubuntu Linux 13.04 installation ready, I could install the HANA Client and HANA Studio.

1. Downloaded the two files I needed, and copied them to a directory of my choice. If you have a revision number different than 70, make sure to update all the commands in this tutorial accordingly.

sap_hana_client_linux64_rev70.tgz
sap_hana_studio_linux64_rev70.tgz

2. Extracted the file contents.

$ tar zxvf sap_hana_client_linux64_rev70.tgz
$ tar zxvf sap_hana_studio_linux64_rev70.tgz

3. First I installed the HANA Client. After changing to the client files directory, I executed the installation script.

$ cd sap_hana_70_client_linux64/
$ sudo ./hdbinst -a client

3a. If it turns out that the execution permissions get lost when you’re moving files around, you’ll need to re-assign the execution permissions before running the installation script.

$ cd sap_hana_70_client_linux64/
$ chmod +x hdbinst
$ chmod +x hdbsetup
$ chmod +x hdbuninst
$ chmod +x instruntime/sdbrun
$ sudo ./hdbinst -a client

4. The HANA Studio requires the Java Runtime. Check your system by running the following command. If Java is not found, you’ll need to install it (see 4a).

$ which java

4a. To install Java on Ubuntu, simply run the following command.

$ sudo apt-get install default-jre

5. With Java ready, you can go ahead and install the HANA Studio.

$ cd sap_hana_70_studio_linux64/
$ sudo ./hdbinst -a studio

5a. Again, if it turns out that the execution permissions get lost while moving files around, you’ll need to re-assign them before running the installation script.

$ cd sap_hana_70_studio_linux64/
$ chmod +x hdbinst
$ chmod +x hdbsetup
$ chmod +x hdbuninst
$ chmod +x instruntime/sdbrun
$ sudo ./hdbinst -a studio

6. If everything goes well, and you go with all the default values, you should end up with everything installed under the /usr/sap/ directory.

7. To run the HANA Studio, just navigate to the installation directory and run the following command, which launches the HANA Studio graphical application. If the application launches without any errors, you’re ready to roll and can start configuring your project.

$ cd /usr/sap/hdbstudio
$ ./hdbstudio

8. To test if the HANA Client has been installed properly, you can run the following command:

$ cd /usr/sap/hdbclient
$ sudo ./hdbsql

8a. Note that if you get an error like “error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory“, you’ll need to install the missing libaio-dev package by running the following command.

$ sudo apt-get install libaio-dev

8b. If the HANA Client is installed correctly, you’ll see a greetings message after calling the client:

$ sudo ./hdbsql
Welcome to the SAP HANA Database interactive terminal.
 
Type: \h for help with commands 
 \q to quit
hdbsql=>

And that’s it. Getting both the HANA Studio and the HANA Client to run on a Mac is really simple, as long as you’re willing to spend a few extra minutes setting up a Linux virtual machine. In total, it took me somewhere between 30 and 45 minutes, but if you follow these instructions you should get it done a lot quicker, as you won’t have to go through the same hiccups I did 🙂

Cheers,
~ Andre Lessa (@lessaworld)

 

Multiple COUNTS within the same SELECT statement

Here’s another interesting problem that I solved. This one relates to SQL Server.

The problem:

To write a single database query that returns multiple row counts, each depending on a different pre-defined condition. Let me make it clear… I had a table called node and I needed to count how many times certain non-unique records had been saved. Since the table was huge, the idea of running multiple queries scared me, so solving the problem with a single query was how I chose to optimize both the performance and the algorithm.

The solution:

Many developers are used to writing statements like select count(*) from certaintable where tablecolumn = specialcondition… and that works great when you just need to count one thing at a time. My solution was to move the where conditions into the select section of the statement.

The query:

select
  sum(case when node_id < 300 then 1 else 0 end) as below_300,
  sum(case when node_id > 200 then 1 else 0 end) as above_200,
  sum(case when node_id between 200 and 300 then 1 else 0 end) as in_range
from node

The recipe:

The secret sauce was to use the sum/case combo instead of the standard count function. By testing each condition I wanted with a case expression, and adding up the number of times each condition evaluated to true (using the sum function), I was able to achieve my goal.
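To see the sum/case combo run end to end, here’s a small self-contained sketch using Python’s sqlite3 (SQLite rather than SQL Server, but the technique is identical; the sample rows are made up):

```python
import sqlite3

# A tiny stand-in for the huge "node" table from the post (data is made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (node_id INTEGER)")
conn.executemany("INSERT INTO node VALUES (?)", [(100,), (250,), (300,), (450,)])

# One pass over the table, three conditional counts at once.
row = conn.execute("""
    select
        sum(case when node_id < 300 then 1 else 0 end),
        sum(case when node_id > 200 then 1 else 0 end),
        sum(case when node_id between 200 and 300 then 1 else 0 end)
    from node
""").fetchone()

print(row)  # (2, 3, 2)
```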

How to shift the elements of an array

Just recently I came across a request to shift an entire slice of an array while keeping the algorithm’s cost to a bare minimum. Let’s put it this way: it’s pretty simple if you have something like ABCDE and you want to shift the elements so it becomes CDEAB. Now, the big problem is this: what if the array holds 1 billion bytes and you need to perform the same kind of operation? That’s tricky, right? Most likely, you couldn’t afford the large amount of extra memory needed to copy it.

So here’s what I came up with. The code is written in Python.

def f_ArrayExercise(a, i):
    n = 0
    # Swap one element at a time, walking both indexes forward.
    while i <= len(a) - 1:
        a[i], a[n] = a[n], a[i]
        i += 1
        n += 1
    # Odd-length arrays (longer than 1) need one final swap of the last two.
    if len(a) % 2 and len(a) > 1:
        a[len(a) - 1], a[len(a) - 2] = a[len(a) - 2], a[len(a) - 1]
    print(a)

The secret here is to swap one element at a time in order to save memory. Note that, with minor modifications, we could easily swap pre-determined chunks of the array at a time instead of single elements, in case we can afford the space for a few more elements. That would help optimize the processing time.

The first argument of the function is the array itself, the second argument is the exact spot you want to use to start shifting the array.

When you play around with this function, you get something like this:

>>> f_ArrayExercise([], 2)
[]
>>> f_ArrayExercise(["A"], 2)
['A']
>>> f_ArrayExercise(["A", "B"], 2)
['A', 'B']
>>> f_ArrayExercise(["A", "B"], 1)
['B', 'A']
>>> f_ArrayExercise(["A", "B", "C"], 2)
['C', 'A', 'B']
>>> f_ArrayExercise(["A", "B", "C", "D"], 2)
['C', 'D', 'A', 'B']
>>> f_ArrayExercise(["A", "B", "C", "D", "E", "F", "G", "H"], 2)
['C', 'D', 'E', 'F', 'G', 'H', 'A', 'B']
>>> f_ArrayExercise(["A", "B", "C", "D", "E", "F", "G", "H", "I"], 2)
['C', 'D', 'E', 'F', 'G', 'H', 'I', 'A', 'B']
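For an arbitrary shift point on very large arrays, a well-known alternative (not part of the original exercise) is the three-reversal trick, which also works in place with O(1) extra memory:

```python
def rotate_left(a, i):
    """Rotate list a left by i positions, in place."""
    def reverse(lo, hi):
        # Reverse a[lo..hi] in place.
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo += 1
            hi -= 1

    if len(a) > 1:
        i %= len(a)
        reverse(0, i - 1)        # reverse the first i elements
        reverse(i, len(a) - 1)   # reverse the remainder
        reverse(0, len(a) - 1)   # reverse the whole array
    return a

print(rotate_left(list("ABCDE"), 2))  # ['C', 'D', 'E', 'A', 'B']
```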
And that’s the end of this Array Manipulation Exercise.