From Digital Age to Nano Age. WorldWide.

Tag: anthropic

Robotic Automations

Anthropic hires Instagram co-founder as head of product | TechCrunch


Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. As CPO, Krieger will oversee Anthropic’s product engineering, management, and design efforts, Anthropic says, as the company works to expand […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


Anthropic is expanding to Europe and raising more money | TechCrunch


On the heels of OpenAI announcing the latest iteration of its GPT large language model, its biggest rival in generative AI in the U.S. announced an expansion of its own. Anthropic said Monday that Claude, its AI assistant, is now live in Europe with support for “multiple languages,” including French, German, Italian and Spanish across […]


Anthropic now lets kids use its AI tech — within limits | TechCrunch


AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — in certain circumstances, at least. In a post on its official blog Friday, the company said it will begin letting teens and preteens use third-party apps (but not necessarily its own apps) powered by its AI models, so long […]


Anthropic launches new iPhone app, premium plan for businesses | TechCrunch


Anthropic, one of the world’s best-funded generative AI startups with $7.6 billion in the bank, is launching a new paid plan aimed at enterprises, including those in highly regulated industries like healthcare, finance and legal, as well as a new iOS app.

Team, the enterprise plan, gives customers higher-priority access to Anthropic’s Claude 3 family of generative AI models plus additional admin and user management controls.

“Anthropic introduced the Team plan now in response to growing demand from enterprise customers who want to deploy Claude’s advanced AI capabilities across their organizations,” Scott White, product lead at Anthropic, told TechCrunch. “The Team plan is designed for businesses of all sizes and industries that want to give their employees access to Claude’s language understanding and generation capabilities in a controlled and trusted environment.”

The Team plan — which joins Anthropic’s individual premium plan, Pro — delivers “greater usage per user” compared to Pro, enabling users to “significantly increase” the number of chats that they can have with Claude. (We’ve asked Anthropic for figures.) Team customers get a 200,000-token (~150,000-word) context window as well as all the advantages of Pro, like early access to new features.

Image Credits: Anthropic

Context window, or context, refers to input data (e.g. text) that a model considers before generating output (e.g. more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
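To make the idea concrete, here is a minimal sketch of how a fixed context window forces older conversation turns out of the model's "short-term memory." This is not Anthropic's implementation; the word-count "tokenizer" is a crude stand-in for a real one.

```python
# Hypothetical chat client that can only send the most recent turns
# fitting inside a fixed token budget. Token counting is approximated
# by word count, purely for illustration.

def truncate_to_window(messages, max_tokens):
    """Keep the most recent messages whose combined size fits the window."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # stand-in for a real token count
        if total + cost > max_tokens:
            break  # older turns fall out of "short-term memory"
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["my name is Ada", "what is 2 + 2?", "now write a haiku about rivers"]
print(truncate_to_window(history, max_tokens=12))
```

With a budget of 12, the oldest turn (the user's name) is dropped, which is exactly the kind of forgetting a 200,000-token window pushes much further away.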

Team also brings with it new toggles to control billing and user management. And in the coming weeks, it’ll gain collaboration features including citations to verify AI-generated claims (models including Anthropic’s tend to hallucinate), integrations with data repos like codebases and customer relationship management platforms (e.g. Salesforce) and — perhaps most intriguing to this writer — a canvas to work with team members on AI-generated docs and projects, Anthropic says.

In the nearer term, Team customers will be able to leverage tool use capabilities for Claude 3, which recently entered open beta. This allows users to equip Claude 3 with custom tools to perform a wider range of tasks, like getting a firm’s current stock price or the local weather report, similar to OpenAI’s GPTs.
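The general shape of that tool-use loop can be sketched as follows. The tool names, the stub "model," and the returned values are all invented for illustration; a real integration would send the tool definitions to the Claude 3 API and execute whichever tool the model requests.

```python
# Sketch of the tool-use pattern: the model names a tool it wants to
# call, the client runs it and feeds the result back. The "model" here
# is a stub; tool names and outputs are hypothetical.

TOOLS = {
    "get_stock_price": lambda ticker: {"ticker": ticker, "price": 178.25},
    "get_weather": lambda city: {"city": city, "forecast": "sunny"},
}

def fake_model(prompt):
    """Stand-in for an LLM deciding which tool to invoke."""
    if "stock" in prompt:
        return {"tool": "get_stock_price", "input": "AMZN"}
    return {"tool": "get_weather", "input": "London"}

def run_with_tools(prompt):
    request = fake_model(prompt)
    handler = TOOLS[request["tool"]]   # dispatch on the requested tool name
    return handler(request["input"])   # result would be sent back to the model

print(run_with_tools("What is Amazon's stock price?"))
```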

“By enabling businesses to deeply integrate Claude into their collaborative workflows, the Team plan positions Anthropic to capture significant enterprise market share as more companies move from AI experimentation to full-scale deployment in pursuit of transformative business outcomes,” White said. “In 2023, customers rapidly experimented with AI, and now in 2024, the focus has shifted to identifying and scaling applications that deliver concrete business value.”

Anthropic talks a big game, but it still might take a substantial effort on its part to get businesses on board.

According to a recent Gartner survey, 49% of companies said that it’s difficult to estimate and demonstrate the value of AI projects, making them a tough sell internally. A separate poll from McKinsey found that 66% of executives believe that generative AI is years away from generating substantive business results.

Image Credits: Anthropic

Yet corporate spending on generative AI is forecasted to be enormous. IDC expects that it’ll reach $15.1 billion in 2027, growing nearly eightfold from its total in 2023.

That’s probably why generative AI vendors, most notably OpenAI, are ramping up their enterprise-focused efforts.

OpenAI recently said that it had more than 600,000 users signed up for the enterprise tier of its generative AI platform ChatGPT, ChatGPT Enterprise. And it’s introduced a slew of tools aimed at satisfying corporate compliance and governance requirements, like a new user interface to compare model performance and quality.

Anthropic is competitively pricing its Team plan: $30 per user per month billed monthly, with a minimum of five seats. OpenAI doesn’t publish the price of ChatGPT Enterprise, but users on Reddit report being quoted anywhere from $30 per user per month for 120 users to $60 per user per month for 250 users. 

“Anthropic’s Team plan is competitive and affordable considering the value it offers organizations,” White said. “The per-user model is straightforward, allowing businesses to start small and expand gradually. This structure supports Anthropic’s growth and stability while enabling enterprises to strategically leverage AI.”

It undoubtedly helps that Anthropic’s launching Team from a position of strength.

Amazon in March completed its $4 billion investment in Anthropic (following a $2 billion Google investment), and the company is reportedly on track to generate more than $850 million in annualized revenue by the end of 2024 — a 70% increase from an earlier projection. Anthropic may see Team as its logical next path to expansion. But at least right now it seems Anthropic can afford to let Team grow organically as it attempts to convince holdout businesses its generative AI is better than the rest.

An Anthropic iOS app

Anthropic’s other piece of news Wednesday is that it’s launching an iOS app. Given that the company’s conspicuously been hiring iOS engineers over the past few months, this comes as no great surprise.

The iOS app provides access to Claude 3, including free access as well as upgraded Pro and Team access. It syncs with Anthropic’s client on the web, and it taps Claude 3’s vision capabilities to offer real-time analysis for uploaded and saved images. For example, users can upload a screenshot of charts from a presentation and ask Claude to summarize them.

Image Credits: Anthropic

“By offering the same functionality as the web version, including chat history syncing and photo upload capabilities, the iOS app aims to make Claude a convenient and integrated part of users’ daily lives, both for personal and professional use,” White said. “It complements the web interface and API offerings, providing another avenue for users to engage with the AI assistant. As we continue to develop and refine our technologies, we’ll continue to explore new ways to deliver value to users across various platforms and use cases, including mobile app development and functionality.”



Stainless is helping OpenAI, Anthropic and others build SDKs for their APIs | TechCrunch


Besides a focus on generative AI, what do AI startups like OpenAI, Anthropic and Together AI share in common? They use Stainless, a platform created by ex-Stripe staffer Alex Rattray, to generate SDKs for their APIs.

Rattray, who studied economics at the University of Pennsylvania, has been building things for as long as he can remember, from an underground newspaper in high school to a bike-share program in college. Rattray picked up programming on the side while at UPenn, which led to a job at Stripe as an engineer on the developer platform team.

At Stripe, Rattray helped to revamp API documentation and launch the system that powers Stripe’s API client SDK. It was while working on those projects that Rattray observed there wasn’t an easy way for companies, including Stripe, to build SDKs for their APIs at scale.

“Handwriting the SDKs couldn’t scale,” he told TechCrunch. “Today, every API designer has to settle a million and one ‘bikeshed’ questions all over again, and painstakingly enforce consistency around these decisions across their API.”

Now, you might be wondering, why would a company need an SDK if it offers an API? APIs are simply protocols, enabling software components to communicate with each other and transfer data. SDKs, on the other hand, offer a set of software-crafting tools that plug into APIs. Without an SDK to accompany an API, API users are forced to read API docs and build everything themselves, which isn’t the best experience.
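A minimal sketch of the difference, using an invented endpoint and payload: without an SDK, every caller hand-assembles requests from the docs; a thin client class does that work once.

```python
# Rough contrast between calling an API "raw" and through an SDK wrapper.
# The endpoint, payload shape and client name are invented for illustration.
import json
import urllib.request

def build_request(api_key, text):
    # Raw usage: every caller re-reads the API docs and hand-assembles this.
    return urllib.request.Request(
        "https://api.example.com/v1/complete",
        data=json.dumps({"prompt": text}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

class ExampleClient:
    """A minimal SDK: auth and request shaping are handled once, so
    callers just write client.complete("...")."""
    def __init__(self, api_key):
        self.api_key = api_key

    def complete(self, text):
        req = build_request(self.api_key, text)
        with urllib.request.urlopen(req) as resp:  # network call in real use
            return json.load(resp)

req = build_request("sk-test", "hello")
print(req.full_url)
```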

Rattray’s solution is Stainless, which takes in an API spec and generates SDKs in a range of programming languages including Python, TypeScript, Kotlin, Go and Java. As APIs evolve and change, Stainless’ platform pushes those updates with options for versioning and publishing changelogs.

“API companies today have a team of several people building libraries in each new language to connect to their API,” Rattray said. “These libraries inevitably become inconsistent, fall out of date and require constant changes from specialist engineers. Stainless fixes that problem by generating them via code.”
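A toy version of that idea, with an invented spec format, might look like the following; real generators such as Stainless consume much richer API specifications and emit full typed packages in many languages.

```python
# Toy spec-to-SDK generator: read a machine-readable API spec and emit
# one client method per endpoint. The spec format is invented.

SPEC = {
    "list_models": {"method": "GET", "path": "/v1/models"},
    "create_completion": {"method": "POST", "path": "/v1/completions"},
}

def generate_sdk(spec):
    lines = ["class GeneratedClient:"]
    for name, ep in spec.items():
        lines += [
            f"    def {name}(self, **kwargs):",
            f"        return self._request({ep['method']!r}, {ep['path']!r}, kwargs)",
        ]
    return "\n".join(lines)

print(generate_sdk(SPEC))
```

When the spec changes, the client code is regenerated rather than hand-edited, which is the consistency problem Rattray describes.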

Stainless isn’t the only API-to-SDK generator out there. There’s LibLab and Speakeasy, to name a couple, plus longstanding open source projects such as the OpenAPI Generator.

Stainless, however, delivers more “polish” than most others, Rattray said, thanks partly to its use of generative AI.

“Stainless uses generative AI to produce an initial ‘Stainless config’ for customers, which is then up to them to fine-tune to their API,” he explained. “This is particularly valuable for AI companies, whose huge user bases include many novice developers trying to integrate with complex features like chat streaming and tools.”

Perhaps that’s what attracted customers like OpenAI, Anthropic and Together AI, along with Lithic, LangChain, Orb, Modern Treasury and Cloudflare. Stainless has “dozens” of paying clients in its beta, Rattray said, and some of the SDKs it’s generated, including OpenAI’s Python SDK, are getting millions of downloads per week.

“If your company wants to be a platform, your API is the bedrock of that,” he said. “Great SDKs for your API drive faster integration, broader feature adoption, quicker upgrades and trust in your engineering quality.”

Most customers are paying for Stainless’ enterprise tier, which comes with additional white-glove services and AI-specific functionality. Publishing a single SDK with Stainless is free. But companies have to fork over between $250 per month and $30,000 per year for multiple SDKs across multiple programming languages.

Rattray bootstrapped Stainless “with revenue from day one,” he said, adding that the company could be profitable as soon as this year; annual recurring revenue is hovering around $1 million. But Rattray opted instead to take on outside investment to build new product lines.

Stainless recently closed a $3.5 million seed round with participation from Sequoia and The General Partnership.

“Across the tech ecosystem, Stainless stands out as a beacon that elevates the developer experience, rivaling the high standard once set by Stripe,” said Anthony Kline, partner at The General Partnership. “As APIs continue to be the core building blocks of integrating services like LLMs into applications, Alex’s first-hand experience pioneering Stripe’s API codegen system uniquely positions him to craft Stainless into the quintessential platform for seamless, high-quality API interactions.”

Stainless has a 10-person team based in New York. Rattray expects headcount to grow to 15 or 20 by the end of the year.



UK probes Amazon and Microsoft over AI partnerships with Mistral, Anthropic, and Inflection | TechCrunch


The U.K.’s Competition and Markets Authority (CMA) is launching preliminary enquiries into whether the close-knit tie-ups and hiring practices involving Microsoft, Amazon and a trio of AI startups fall within the scope of its merger rules — and whether the arrangements could impact competition in the U.K. market.

The announcement comes amid growing scrutiny of Big Tech’s fresh approach to M&A in the world of artificial intelligence (AI), where the so-called “quasi-merger” has emerged as flavor of the day as a means of — apparently — bypassing regulatory oversight.

Microsoft’s investment in, and close partnership with, ChatGPT-maker OpenAI attracted the CMA’s scrutiny late last year, with the regulator launching a formal “invitation to comment,” aimed at relevant stakeholders in the AI and business spheres. Since then, Microsoft hired the core team behind Inflection AI, a U.S.-based OpenAI rival it had previously invested in, and earlier this month Microsoft launched a new London AI hub fronted by former Inflection and DeepMind scientist Jordan Hoffmann.

Elsewhere, Microsoft also recently invested in Mistral AI, a French AI startup working on foundational models that could be construed as rivalling OpenAI.

And then there’s Amazon, which recently completed its $4 billion investment in Anthropic — another U.S.-based AI company working on large language models.

Collectively, these latest deals are what prompted the regulator’s preliminary enquiries.

The CMA’s executive director of mergers, Joel Bamford, said that it’s merely inviting comments from relevant parties, as it assesses whether these various partnerships are tantamount to mergers, and whether they might impact competition in the U.K.’s fast-growing AI industry.

“Foundation models have the potential to fundamentally impact the way we all live and work, including products and services across so many U.K. sectors – healthcare, energy, transport, finance and more,” Bamford said in a statement. “So open, fair, and effective competition in foundation model markets is critical to making sure the full benefits of this transformation are realised by people and businesses in the UK, as well as our wider economy where technology has a huge role to play in growth and productivity.”

This is a developing story; refresh for updates.



Watch: How Anthropic found a trick to get AI to give you answers it's not supposed to


If you build it, people will try to break it. Sometimes even the people building stuff are the ones breaking it. Such is the case with Anthropic and its latest research, which demonstrates an interesting vulnerability in current LLM technology. More or less, if you keep at a question, you can break guardrails and wind up with large language models telling you stuff they are designed not to. Like how to build a bomb.

Of course, given progress in open source AI technology, you can spin up your own LLM locally and just ask it whatever you want, but for more consumer-grade stuff this is an issue worth pondering. What’s fun about AI today is the quick pace at which it is advancing, and how well — or not — we’re doing as a species to better understand what we’re building.

If you’ll allow me the thought, I wonder if we’re going to see more questions and issues of the type Anthropic outlines as LLMs and other new AI model types get smarter and larger. Which is perhaps repeating myself. But the closer we get to more generalized AI intelligence, the more it should resemble a thinking entity, and not a computer that we can program, right? If so, we might have a harder time nailing down edge cases, to the point where that work becomes infeasible. Anyway, let’s talk about what Anthropic recently shared.



Why Amazon is betting $4B on Anthropic’s AI success


The current AI wave is a never-ending barrage of news items. To understand what I mean, ask yourself how long you spent considering the fact that Amazon put another $2.75 billion into Anthropic last week. Right?

We’ve become inured to the capital influx that is now common in AI, even as the headline numbers get bigger. Sure, Amazon is slinging cash at Anthropic, but single-digit billions are chump change compared to what some companies have planned. Hell, even smaller tech companies — compared to the true giants — are spending to stay on the cutting edge.

So as we digest Amazon’s latest, let’s do a quick rewind through some of the largest AI rounds in the last few quarters and ask ourselves why some Big Tech corps get busy with their checkbooks.



Anthropic researchers wear down AI ethics with repeated questions | TechCrunch


How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

They call the approach “many-shot jailbreaking,” and they have both written a paper about it and informed their peers in the AI community so it can be mitigated.

The vulnerability is a new one, resulting from the increased “context window” of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic’s researchers found was that these models with large context windows tend to perform better on many tasks if there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or priming document, like a big list of trivia that the model has in context), the answers actually get better over time. So a fact the model might have gotten wrong as the first question, it may well get right as the hundredth.

But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if the prompt shows it answering 99 other questions of lesser harmfulness and then asks it to build a bomb … it’s a lot more likely to comply.
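Mechanically, a many-shot prompt is just one long string of fabricated dialogue turns followed by the target question. A benign sketch, with harmless arithmetic standing in for the priming content:

```python
# Benign illustration of how a "many-shot" prompt is assembled: dozens
# of fabricated question/answer turns are written directly into a single
# prompt, followed by the final target question.

def build_many_shot_prompt(qa_pairs, target_question):
    turns = [f"Human: {q}\nAssistant: {a}" for q, a in qa_pairs]
    turns.append(f"Human: {target_question}\nAssistant:")
    return "\n\n".join(turns)

shots = [(f"What is {i} + {i}?", str(i + i)) for i in range(3)]
print(build_many_shot_prompt(shots, "What is 40 + 2?"))
```

Note that both the questions and the answers are authored by the attacker; the model never actually produced the priming replies.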

(Update: I misunderstood the research initially as actually having the model answer the series of priming prompts, but the questions and answers are written into the prompt itself. This makes more sense, and I’ve updated the post to reflect it.)

Image Credits: Anthropic

Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window or prompt itself. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers — though you have to supply the answers as well as the questions in order to create the effect.

The team already informed its peers and indeed competitors about this attack, something it hopes will “foster a culture where exploits like this are openly shared among LLM providers and researchers.”

For their own mitigation, they found that although limiting the context window helps, it also has a negative effect on the model’s performance. Can’t have that — so they are working on classifying and contextualizing queries before they go to the model. Of course, that just makes it so you have a different model to fool … but at this stage, goalpost-moving in AI security is to be expected.
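As a rough illustration of that kind of pre-screening, here is a toy keyword gate. This is not Anthropic's actual classifier, which would itself be a trained model rather than a word list.

```python
# Toy stand-in for query classification before the main model sees a
# prompt: flagged queries are refused or rerouted. A real system would
# use a trained classifier, not a keyword list.

FLAGGED_TERMS = {"bomb", "weapon", "exploit"}

def screen_query(prompt):
    lowered = prompt.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        return "refuse"   # or route to a hardened model / human review
    return "allow"

print(screen_query("How do I bake bread?"))   # allow
print(screen_query("How do I build a bomb?")) # refuse
```

As the article notes, this only moves the problem: now the attacker has a different, hopefully simpler, model to fool.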

