What does Hugging Face and Microsoft’s collaboration mean for applied AI

This article is part of our series exploring the business of artificial intelligence.

Last week, Hugging Face announced a new product in collaboration with Microsoft called Hugging Face Endpoints on Azure, which allows users to set up and run thousands of machine learning models on Microsoft’s cloud platform.

Having started as a chatbot app, Hugging Face has made its name as a hub for transformer models, a deep learning architecture that has been behind many recent advances in artificial intelligence, including large language models such as OpenAI’s GPT-3 and DeepMind’s protein-folding model AlphaFold.


Big tech companies like Google, Facebook, and Microsoft have been using transformer models for several years. But the past couple of years have seen growing interest in transformers among smaller companies, including many that don’t have in-house machine learning talent.

This is a great opportunity for companies like Hugging Face, whose vision is to become the GitHub of machine learning. The company recently secured $100 million in a Series C round at a $2 billion valuation. The company wants to offer a wide range of machine learning services, including ready-made transformer models.

However, creating a business around transformers presents challenges that favor big tech companies and put companies like Hugging Face at a disadvantage. Hugging Face’s collaboration with Microsoft could be the beginning of market consolidation and a potential acquisition in the future.

[Image: transformer neural network]

Transformer models can perform many tasks, including text classification, summarization, text generation, question answering, translation, source code generation, and speech-to-text conversion. Recently, transformers have also moved into other fields, such as drug research and computer vision.

One of the main advantages of transformer models is their ability to scale. Recent years have shown that the performance of transformers grows as they are scaled up and trained on larger datasets. However, training and operating large transformers is very difficult and expensive. A recent paper from Facebook presents some of the behind-the-scenes challenges of training very large language models. While not all transformers are as large as OpenAI’s GPT-3 and Facebook’s OPT-175B, they are nonetheless tricky to get right.

Hugging Face provides a large repertoire of pre-trained ML models to ease the burden of transformer deployment. Developers can download transformers directly from the Hugging Face library and run them on their own servers.
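As a minimal sketch of what that looks like in practice (assuming the open-source `transformers` package is installed; the model name below is just one example of a ready-made checkpoint on the Hub, not one named in this article):

```python
# Minimal sketch: download a pre-trained transformer from the Hugging Face Hub
# and run it locally (requires `pip install transformers`).
from transformers import pipeline

# The pipeline downloads and caches the model weights on first use;
# the model name here is just an example checkpoint from the Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hugging Face Endpoints makes deployment much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```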

Pre-trained models are great for experimenting with and fine-tuning transformers for downstream applications. However, when it comes to applying ML models to real products, developers must consider many other parameters, including integration costs, infrastructure, scaling, and retraining. If not configured correctly, transformers can be costly to operate, which can have a significant impact on the product’s business model.

Therefore, while transformers are very useful, many of the organizations that would benefit from them do not have the talent and resources to train or operate them in a cost-effective manner.

Hugging Face Endpoints on Azure

[Image: Hugging Face Endpoints on Azure]

An alternative to running your own transformer is to use ML models hosted on cloud servers. In recent years, many companies have launched services that make it possible to use machine learning models through API calls without having to know how to train, configure, and deploy them.

Two years ago, Hugging Face launched its own ML service, called the Inference API, which provides access to thousands of pre-trained models (mostly transformers) rather than the limited options of other services. Customers can use the Inference API on shared resources or have Hugging Face set up and maintain dedicated infrastructure for them. Hosted models make ML accessible to a wide variety of organizations, just as cloud hosting services brought blogs and websites to organizations that couldn’t set up their own web servers.
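For illustration, here is a rough sketch of what calling a hosted model through the Inference API looks like over plain HTTP; the URL format follows Hugging Face’s documented REST interface at the time of writing, and the token and model name are placeholders rather than details from the article:

```python
# Rough sketch of calling a model hosted on Hugging Face's Inference API.
# The token and model name are placeholders, not details from the article.
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
HEADERS = {"Authorization": "Bearer <YOUR_HF_API_TOKEN>"}

def query(text: str) -> list:
    """Send raw text to the hosted model and return the parsed JSON response."""
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    return response.json()

# No local GPU or model download needed; inference runs on Hugging Face's servers.
print(query("Hosted inference spares us from managing our own model servers."))
```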

So why did Hugging Face turn to Microsoft? Turning hosted ML into a profitable business is quite complicated (see, for example, OpenAI’s GPT-3 API). Companies like Google, Facebook, and Microsoft have invested billions of dollars in creating specialized processors and servers that reduce the operating costs of transformers and other machine learning models.

Hugging Face Endpoints takes advantage of key Azure features, including flexible scaling options, global availability, and security standards. The interface is easy to use, and it takes only a few clicks to set up a model and configure it to scale for different request volumes. Microsoft has already built a massive infrastructure to run transformers, which will likely reduce the costs of serving ML models for Hugging Face. (Currently in beta, Hugging Face Endpoints is free, and users only pay for Azure infrastructure. The company plans a usage-based pricing model when the product becomes generally available.)

Most importantly, Microsoft has access to a significant share of the market targeted by Hugging Face.

According to the Hugging Face blog, “With 95% of Fortune 500 companies trusting Azure for their business, it made perfect sense for Hugging Face and Microsoft to tackle this issue together.”

Many companies find it frustrating to register and pay for different cloud services. Integrating Hugging Face’s Hosted ML product with Microsoft Azure ML reduces barriers to delivering value for its products and expands the company’s market reach.

[Image: Microsoft and Hugging Face partnership. Image credit: 123RF (with adjustments)]

Hugging Face Endpoints could be the start of many product integrations in the future, since Microsoft’s suite of tools (Outlook, Word, Excel, Teams, etc.) offers plenty of opportunities for embedding machine learning models. Company executives have already hinted at plans to expand the partnership with Microsoft.

“This is the beginning of the Hugging Face and Azure collaboration we are announcing today, as we work together to make our solutions, our machine learning platform, and our models accessible and easy to work with on Azure. Hugging Face Endpoints on Azure is the first solution we have on the Azure Marketplace, but we are working hard to bring more Hugging Face solutions to Azure,” Jeff Boudier, product manager at Hugging Face, told TechCrunch. “We identified [the] roadblocks to deploying machine learning solutions into production [emphasis mine] and began collaborating with Microsoft to address the growing interest in a simple, ready-to-use solution.”

This could be very useful for Hugging Face, which must find a business model that justifies its $2 billion valuation.

But Hugging Face’s collaboration with Microsoft won’t be without tradeoffs.

Earlier this month, in an interview with Forbes, Clément Delangue, Hugging Face co-founder and CEO, said he has rejected several “meaningful acquisition offers” and will not sell his company to Microsoft the way GitHub did.

However, the direction his company is now taking will make its business model increasingly based on Azure (again, OpenAI provides a good example of where things are going) and possibly reduce the market for its standalone Inference API.

Without Microsoft’s market access, Hugging Face’s products would face greater adoption barriers, a weaker value proposition, and higher costs (the “barriers” mentioned above). And Microsoft can always release a competing product that is better, faster, and cheaper.

If a takeover bid from Microsoft ever comes, Hugging Face will have to make a tough choice. This is also a reminder of where the market for large language models and applied machine learning is headed.

In comments posted on the Hugging Face blog, Delangue said, “Hugging Face’s mission is to democratize quality machine learning. We strive to help every developer and organization build high-quality ML-powered apps that have a positive impact on society and businesses.”

In fact, products like Hugging Face Endpoints will democratize machine learning for developers.

But transformers and large language models are also inherently undemocratic: they give a lot of power to the few companies that have the resources to build and run them. While more people will be able to build products on top of Azure-powered transformers, Microsoft will continue to secure and expand its market share in what appears to be the future of applied machine learning. Companies like Hugging Face will have to live with the consequences.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.