Amazon recently highlighted Omilia in an AWS case study, describing how we train our machine learning models in hours instead of months by leveraging elastic AWS infrastructure.

Omilia CEO Dimitris Vassos states:

Creating and fine-tuning an ASR engine is a CPU-intensive business. Omilia runs deep neural network / machine learning training tasks over swarms of thousands of CPUs in the Amazon AWS cloud, enabling us to train and fine-tune ASR in new languages in days (as opposed to weeks or months) and allowing us to offer a perfect conversational experience in more than 20 languages!

Read the full case study here.