Deep Learning on 33,000,000 data points using a few lines of YAML

May 4th, 2020 - Hamza Tahir


Over the last few years at maiot, we have regularly dealt with datasets that contain millions of data points. Today, I want to write about how we use our machine learning platform, the Core Engine, to build production-ready distributed training pipelines. These pipelines can process millions of data points in a matter of hours. If you also want to build large-scale deep learning pipelines, sign up for the Core Engine for free here and follow along.

