From Digital Age to Nano Age. WorldWide.


Robotic Automations

OpenAI says it's building a tool to let content creators 'opt out' of AI training | TechCrunch


OpenAI says it’s developing a tool to let creators better control how their content is used in AI.

Called Media Manager, the tool — once it’s released — will allow creators and content owners to identify their works to OpenAI and specify how they want those works to be included or excluded from AI research and training. The goal is to have the tool in place by 2025, OpenAI says, as the company works with creators, content owners and regulators toward a common standard.

“This will require cutting-edge machine learning research to build a first-ever tool of its kind to help us identify copyrighted text, images, audio and video across multiple sources and reflect creator preferences,” OpenAI writes in a blog post. “Over time, we plan to introduce additional choices and features.”


Software Development in Sri Lanka


Meta unveils its newest custom AI chip as it races to catch up | TechCrunch


Meta, hell-bent on catching up to rivals in the generative AI space, is spending billions on its own AI efforts. A portion of those billions is going toward recruiting AI researchers. But an even larger chunk is being spent developing hardware, specifically chips to run and train Meta’s AI models.

Meta unveiled the newest fruit of its chip development efforts today, notably just a day after Intel announced its latest AI accelerator hardware. Called the "next-gen" Meta Training and Inference Accelerator (MTIA), the successor to last year's MTIA v1, the chip runs models for tasks such as ranking and recommending display ads on Meta's properties (e.g. Facebook).

Compared to MTIA v1, which was built on a 7nm process, the next-gen MTIA is 5nm. (In chip manufacturing, “process” refers to the size of the smallest component that can be built on the chip.) The next-gen MTIA is a physically larger design, packed with more processing cores than its predecessor. And while it consumes more power — 90W versus 25W — it also boasts more internal memory (128MB versus 64MB) and runs at a higher average clock speed (1.35GHz up from 800MHz).
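The disclosed figures can be turned into rough ratios. This is a back-of-the-envelope sketch using only the numbers Meta published; real workload efficiency depends on the models being run, so the derived ratios are illustrative only:

```python
# Back-of-the-envelope comparison of the disclosed MTIA specs.
# All base figures come from Meta's announcement; the derived ratios
# are illustrative and say nothing about real workload efficiency.
mtia_v1 = {"power_w": 25, "sram_mb": 64, "clock_ghz": 0.8}
mtia_v2 = {"power_w": 90, "sram_mb": 128, "clock_ghz": 1.35}

power_ratio = mtia_v2["power_w"] / mtia_v1["power_w"]      # 3.6x the power draw
sram_ratio = mtia_v2["sram_mb"] / mtia_v1["sram_mb"]       # 2x the on-chip memory
clock_ratio = mtia_v2["clock_ghz"] / mtia_v1["clock_ghz"]  # ~1.69x the clock speed

# Meta's claimed "up to 3x" performance against 3.6x the power draw
# suggests the perf-per-watt story is murkier than the headline figure.
perf_per_watt_ratio = 3.0 / power_ratio  # ~0.83x
print(power_ratio, sram_ratio, round(clock_ratio, 4), round(perf_per_watt_ratio, 2))
```

Notably, taking the "up to 3x" claim at face value, the next-gen chip's performance per watt would actually be slightly lower than MTIA v1's, which may be part of why Meta frames the gains in terms of absolute throughput.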

Meta says the next-gen MTIA is currently live in 16 of its data center regions and delivering up to 3x overall better performance compared to MTIA v1. If that “3x” claim sounds a bit vague, you’re not wrong — we thought so too. But Meta would only volunteer that the figure came from testing the performance of “four key models” across both chips.

“Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs,” Meta writes in a blog post shared with TechCrunch.

Meta’s hardware showcase — which comes a mere 24 hours after a press briefing on the company’s various ongoing generative AI initiatives — is unusual for several reasons.

One, Meta reveals in the blog post that it’s not using the next-gen MTIA for generative AI training workloads at the moment, although the company claims it has “several programs underway” exploring this. Two, Meta admits that the next-gen MTIA won’t replace GPUs for running or training models — but instead will complement them.

Reading between the lines, Meta is moving slowly — perhaps more slowly than it’d like.

Meta’s AI teams are almost certainly under pressure to cut costs. The company’s set to spend an estimated $18 billion by the end of 2024 on GPUs for training and running generative AI models, and — with training costs for cutting-edge generative models ranging in the tens of millions of dollars — in-house hardware presents an attractive alternative.

And while Meta’s hardware drags, rivals are pulling ahead, much to the consternation of Meta’s leadership, I’d suspect.

Google this week made its fifth-generation custom chip for training AI models, TPU v5p, generally available to Google Cloud customers, and revealed its first dedicated chip for running models, Axion. Amazon has several custom AI chip families under its belt. And Microsoft last year jumped into the fray with the Azure Maia AI Accelerator and the Azure Cobalt 100 CPU.

In the blog post, Meta says it took fewer than nine months to “go from first silicon to production models” of the next-gen MTIA, which to be fair is shorter than the typical window between Google TPUs. But Meta has a lot of catching up to do if it hopes to achieve a measure of independence from third-party GPUs — and match its stiff competition.



OpenAI expands its custom model training program | TechCrunch


OpenAI is expanding a program, Custom Model, to help enterprise customers develop tailored generative AI models using its technology for specific use cases, domains and applications.

Custom Model launched last year at OpenAI’s inaugural developer conference, DevDay, offering companies an opportunity to work with a group of dedicated OpenAI researchers to train and optimize models for specific domains. “Dozens” of customers have enrolled in Custom Model since. But OpenAI says that, in working with this initial crop of users, it’s come to realize the need to grow the program to further “maximize performance.”

Hence assisted fine-tuning and custom-trained models.

Assisted fine-tuning, a new component of the Custom Model program, leverages techniques beyond fine-tuning — such as “additional hyperparameters and various parameter efficient fine-tuning methods at a larger scale,” in OpenAI’s words — to enable organizations to set up data training pipelines, evaluation systems and other supporting infrastructure toward bolstering model performance on particular tasks.
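OpenAI doesn't name the parameter-efficient methods it uses, but low-rank adaptation (LoRA) is a common example of the genre: instead of updating a full weight matrix, you freeze it and train a small low-rank correction alongside it. A minimal NumPy sketch of the idea (illustrative only; OpenAI's internal techniques are not public):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 512, 8                  # model dimension and (much smaller) adapter rank
W = rng.normal(size=(d, d))    # frozen pretrained weight matrix

# Trainable low-rank factors: only these are updated during fine-tuning.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))           # B starts at zero, so the adapter is a no-op initially

def adapted_forward(x):
    # Effective weight is W + B @ A, but it is never materialized:
    # the adapter path adds only 2*d*r trainable parameters.
    return x @ W.T + x @ A.T @ B.T

full_params = d * d
adapter_params = 2 * d * r
print(f"trainable params: {adapter_params} vs {full_params} "
      f"({adapter_params / full_params:.1%} of full fine-tuning)")
```

The appeal for a program like Custom Model is that the adapter here trains about 3% of the parameters a full fine-tune would, which is what makes tailoring large base models per customer economically plausible.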

As for custom-trained models, they’re custom models built with OpenAI — using OpenAI’s base models and tools (e.g. GPT-4) — for customers that “need to more deeply fine-tune their models” or “imbue new, domain-specific knowledge,” OpenAI says.

OpenAI gives the example of SK Telecom, the Korean telecommunications giant, which worked with OpenAI to fine-tune GPT-4 to improve its performance in "telecom-related conversations" in Korean. Another customer, Harvey — which is building AI-powered legal tools with support from the OpenAI Startup Fund, OpenAI's AI-focused venture arm — teamed up with OpenAI to create a custom model for case law that incorporated hundreds of millions of words of legal text and feedback from licensed attorneys.

“We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case,” OpenAI writes in a blog post. “With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations.”


OpenAI is flying high, reportedly nearing an astounding $2 billion in annualized revenue. But there’s surely internal pressure to maintain pace, particularly as the company plots a $100 billion data center co-developed with Microsoft (if reports are to be believed). The cost of training and serving flagship generative AI models isn’t coming down anytime soon after all, and consulting work like custom model training might just be the thing to keep revenue growing while OpenAI plots its next moves.

Fine-tuned and custom models could also lessen the strain on OpenAI’s model serving infrastructure. Tailored models are in many cases smaller and more performant than their general-purpose counterparts, and — as the demand for generative AI reaches a fever pitch — no doubt present an attractive solution for a historically compute-capacity-challenged OpenAI.

Alongside the expanded Custom Model program and custom model building, OpenAI today unveiled new model fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and enhancements to tooling. Mum’s the word on fine-tuning for GPT-4, however, which launched in early access during DevDay.
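For developers, the entry point to these features is still the standard fine-tuning flow: training examples are written as JSONL in OpenAI's documented chat format, uploaded, and then attached to a fine-tuning job. A minimal sketch, with placeholder file names and example content; the job-creation calls are commented out because they require the `openai` package and an API key:

```python
import json

# Each training example is one JSON object per line, in the chat format
# OpenAI's fine-tuning endpoint expects for gpt-3.5-turbo.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer telecom billing questions."},
        {"role": "user", "content": "Why did my bill go up this month?"},
        {"role": "assistant", "content": "Let's check for plan changes or overage charges."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Creating the job then looks roughly like this (left commented out here):
# from openai import OpenAI
# client = OpenAI()
# file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")

# Sanity-check the file round-trips as valid JSONL.
with open("train.jsonl") as f:
    lines = [json.loads(line) for line in f]
print(len(lines), lines[0]["messages"][0]["role"])
```

The new dashboard and Weights & Biases integration sit on top of this same flow, surfacing per-job quality and performance metrics rather than changing how jobs are defined.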

