Deep Learning on 33,000,000 data points using a few lines of YAML

May 4th, 2020 - Hamza Tahir


Over the last few years at maiot, we have regularly dealt with datasets containing millions of data points. Today, I want to write about how we use our machine learning platform, the Core Engine, to build production-ready distributed training pipelines. These pipelines can process millions of data points in a matter of hours. If you also want to build large-scale deep learning pipelines, sign up for the Core Engine for free here and follow along.
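To give a flavor of what "a few lines of YAML" means in practice, here is a minimal, hypothetical sketch of what a pipeline configuration could look like. The keys and values below (dataset name, split ratios, trainer settings) are illustrative assumptions for this post, not the Core Engine's actual schema:

```yaml
# Hypothetical pipeline config (illustrative only, not the real Core Engine schema)
version: 1

datasource:
  name: my-large-dataset   # assumed dataset identifier

split:
  train: 0.7               # 70% of data for training
  eval: 0.3                # 30% held out for evaluation

trainer:
  architecture: feedforward
  epochs: 10
  batch_size: 256
  distributed: true        # fan the job out across workers
```

The appeal of a declarative config like this is that the same few lines describe the whole pipeline, whether it runs on a laptop sample or on tens of millions of data points in the cloud.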


If you want to keep in touch with the latest blog posts, please subscribe to our RSS Feed
