
Why Generative AI is the tipping point for wider participation in ML

Foundation Models, APIs and Serverless Compute are key to lowering the barrier to entry and paving the way for the disruptors.

  • Digest time: 8 min read
  • Published: 11/2/2022
  • Tags: MLOps
  • Author: Rosie Bennett

I joined Mystic.ai in March 2022 and have been fully immersed in the emerging world of machine learning operations (MLOps) and Generative AI ever since. This article is my take on why 2022 was a pivotal year for widening participation in machine learning, and for the accessibility of applied artificial intelligence. I hesitate to use the word democratisation because it brings its own complexities and is, in any case, much overused in keynote speeches from big tech companies.

Photo credit: UCF Centre for Research in Computer Vision

AI development demands a lot of resources, including expert-level knowledge and specialised machine learning compute. With this in mind, the three levers that are key to driving broader adoption are:

  1. Low-code tools and infrastructure;
  2. Access to Foundation Models;
  3. Affordable compute, preferably achieved through efficiencies in hardware and server architecture rather than a race to the bottom on the price of bare-metal cloud compute.

Tools, Infrastructure and APIs

For the majority of organisations, delivering and integrating AI solutions is expensive, and it takes specialists to do it well. Machine Learning Operations (MLOps) is a set of practices that aims to interconnect and automate each step of the machine learning lifecycle, from model generation, orchestration and deployment through to health monitoring and diagnostics.

Running data science and machine learning code as web services behind APIs as part of this infrastructure can fast-track the practice of deploying AI in production. Using APIs as a clean interface between the models and the applications that use them allows for slick product development and reuse of the same models across many applications.

Combined with good tooling and compatibility with the broader software stack, it also opens the door to experimentation and wider participation.

Algorithms need frequent updates, while the software applications that run them need to be stable, reliable and robust. Separating the two means that AI practitioners can focus on building models, or deploying open-source ones, without worrying about the infrastructure.
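To make that separation concrete, here's a minimal sketch of a model served behind an HTTP API. The framework (FastAPI), the toy model and the /predict route are illustrative assumptions on my part, not a prescription for any particular stack:

```python
# A minimal sketch: a toy model behind an HTTP API.
# FastAPI, the /predict route and the toy model are illustrative choices.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a toy model at startup; in production you would load a
# pretrained artefact from storage instead.
model = LogisticRegression().fit(np.array([[0.0], [1.0]]), [0, 1])

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest):
    # Clients only see a stable JSON contract; the model behind it
    # can be retrained or swapped without any client-side changes.
    prediction = model.predict(np.array([req.features]))
    return {"prediction": int(prediction[0])}
```

Run it with uvicorn and the model becomes callable from any language that can make an HTTP request, which is exactly what lets product teams and model builders work independently.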

Believe the hype

Gartner identified advances in machine learning operations, computer vision, chatbots and ‘Edge’ AI as the key drivers for adoption in 2021. However, it’s Generative AI and Foundation Models that they identify as the rising 'innovation triggers' of 2022.

The science and application of these models is, of course, not new. However, access and usage have mostly been the preserve of the large tech companies and research organisations that created them.

Foundation models are infamously difficult to train and eye-wateringly expensive to produce.  

Recent advances in NLP have been a few years in the making, starting in 2018 with the launch of two massive deep learning models: GPT from OpenAI, and BERT from Google. Since then, the capabilities of the models have advanced by leaps and bounds, but the real game-changer has been the audacious moves towards open access and open-source releases. Everything moved up a gear on 11 June 2020, when OpenAI announced that users could request access to its user-friendly GPT-3 API.

And things have escalated from there: in April this year, a collaboration of machine learning engineers (Boris Dayma and Pedro Cuenca) released DALL-E Mini (now known as Craiyon), an open-source model inspired by OpenAI's tech that creates images from text prompts. The model is trained on millions of images from the internet together with their associated captions.

Four months later, a relatively stealthy early-stage company, Stability AI, stole the limelight by announcing the release of Stable Diffusion, a text-to-image generator that creates stunning art within seconds. It was the first time that anyone with access to a web browser could truly experiment with creating realistic images and art from a natural-language text description, and then go on to use the API commercially in production. Thanks to a breakthrough in speed and quality, it can even run on consumer GPUs (try it out for yourself at Playgrounds.ai).
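For a sense of how low the barrier now is, here's a minimal sketch of running Stable Diffusion locally with Hugging Face's diffusers library. The checkpoint ID and settings are illustrative assumptions, and you may need to accept the model licence on the Hugging Face Hub first:

```python
# A minimal sketch of running Stable Diffusion on a consumer GPU with
# Hugging Face's diffusers library. The checkpoint ID is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision fits in ~6-8 GB of VRAM
)
pipe = pipe.to("cuda")

# One natural-language prompt in, one image out.
image = pipe("a lighthouse at dawn, oil painting").images[0]
image.save("lighthouse.png")
```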


The net results of using these zero-shot models can be astonishing, and the hype has quickly followed: Stability AI is so hot that it became a unicorn in its seed round on 17 Oct 22. We are seeing iterations on the theme each week as we move into the next frontiers of text-to-video and text-to-3D, with Google and Meta going head to head. It's certainly a thrilling time to be in the sector!

But hold your horses!

But, there's a BUT! As practitioners in the space we are all too aware of the limitations, and that there's still a long way to go with the vanilla versions of the foundation models. Inspirobot is a crass but prime example: ask it for advice and it will confidently share flawed wisdom. As a data scientist told me recently, these are tools to augment human creativity, not for doing maths. If you want 100% accuracy, use a spreadsheet.

Or, use an app. And this is how the new wave of AI-first innovators and entrepreneurs are approaching the opportunity - and they are doing it at astonishing speed.

Current market trends indicate that AI products and services, like houses, are often easier to build from the ground up than to retrofit.

Make way for the disruptors

This new gold rush is driven by an emerging breed of 'prompt engineers' and savvy developers who can supercharge the foundation models by adding layers of proprietary data, relevance and reliability. The result is a fast-growing cohort of powerful, bespoke applications in a wave of disruption across every content and data vertical.

And we know, because many of these developers, startups and emerging enterprises are using our toolset and hopping onto our Pipeline Discord server daily to chat.

Which brings me to my final point about compute. As well as advances in hardware optimisation, the option to go serverless can make AI deployment more affordable and, therefore, accessible. A traditional approach to ML hosting involves dedicating high-end resources to individual models, and ring-fencing expensive hardware (GPUs) for training and 'fine-tuning'.

This can lead to under-utilisation of resources and wasted uptime, which is not good for the budget (or the planet). Going serverless allows a practitioner to deploy models directly into the cloud via an API; the compute is distributed across a cluster of shared GPUs and scales both up and down by design.
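From the practitioner's side, using such a deployment can be as simple as a single HTTP call. The endpoint, auth scheme, payload shape and model name in the sketch below are hypothetical placeholders, not any specific provider's API:

```python
# A hedged sketch of calling a serverless inference endpoint over HTTP.
# The URL, auth scheme, payload shape and model name are hypothetical
# placeholders, not a specific provider's API.
import requests

API_URL = "https://api.example.com/v1/runs"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                     # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "stable-diffusion",  # illustrative model identifier
        "inputs": {"prompt": "a lighthouse at dawn"},
    },
    timeout=120,  # cold starts on shared GPUs can take a while
)
response.raise_for_status()
print(response.json())
```

The provider decides which shared GPU actually runs the job, so there's no idle, ring-fenced hardware sitting on the bill.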

What's next?

We have already seen the first AI-first unicorn, Jasper.ai, and we can be sure that the next contenders are already warming up as developer side-hustles, climbing the rankings on Product Hunt or waiting in the lobby at Y Combinator (see the winter 2022 AI batch here).

I've joined a fantastic team who are on a mission to empower the next generation of AI-first enterprises - we've come a long way already but, in terms of the future potential of the market, things are just getting started!

ABOUT PIPELINE.AI

Pipeline AI makes it easy to work with ML models and to deploy AI at scale. The self-serve platform provides a fast, pay-as-you-go API to run pretrained or proprietary models in production. If you are looking to deploy a large product and would like to sign up as an Enterprise customer, please get in touch.

Follow us on Twitter and LinkedIn.