
Tag: OpenAI


Musk's xAI shows there's more money on the sidelines for AI startups | TechCrunch


We’re off to an AI-heavy start to the week. OpenAI has a new deal with the Financial Times that caught our eye. Sure, it’s another content licensing deal, but there appears to be a bit more in the tie-up than just content flowing one way, and money the other.

On this early-week episode of Equity, we also dug into the xAI news that TechCrunch broke recently; namely, that Musk’s AI enterprise is no longer looking to raise $3 billion at a $15 billion valuation. No, it’s now looking for $6 billion at an $18 billion valuation. That’s a lot of capital.

But there was even more to chat about, including the EU handing Apple even more bad news by placing iPadOS under its DMA rules, which should force third-party app stores onto the tablet line in time. And Tesla got some good news in China, though just how impactful it will prove is not 100% certain at this juncture.

And to close out, the Times has a fascinating look at the pace at which venture capitalists are putting money into AI startups. Given OpenAI’s ability to land big deals with Microsoft money behind it, I wonder if even that pace is enough?

Equity is TechCrunch’s flagship podcast and posts every Monday, Wednesday and Friday, and you can subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

You also can follow Equity on X and Threads, at @EquityPod.

For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes over at Simplecast.





OpenAI inks strategic tie-up with UK's Financial Times, including content use | TechCrunch


OpenAI, maker of the viral AI chatbot ChatGPT, has netted another news licensing deal in Europe, adding London’s Financial Times to a growing list of publishers it’s paying for content access.

As with OpenAI’s earlier publisher licensing deals, financial terms of the arrangement are not being made public.

The latest deal looks a touch cozier than other recent OpenAI publisher tie-ups — such as those with German giant Axel Springer, with the AP, and with Le Monde and Prisa Media in France and Spain, respectively — as the pair are referring to the arrangement as a “strategic partnership and licensing agreement”. (Though Le Monde’s CEO also referred to the “partnership” it announced with OpenAI in March as a “strategic move”.)

However, we understand it’s a non-exclusive licensing arrangement — and OpenAI is not taking any kind of stake in the FT Group.

On the content licensing front, the pair said the deal covers OpenAI’s use of the FT’s content for training AI models and, where appropriate, for display in generative AI responses produced by tools like ChatGPT, which looks much the same as its other publisher deals.

The strategic element appears to center on the FT boosting its understanding of generative AI, especially as a content discovery tool, and what’s being couched as a collaboration aimed at developing “new AI products and features for FT readers” — suggesting the news publisher is eager to expand its use of the AI technology more generally.

“Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

The publisher also noted that it became a customer of OpenAI’s ChatGPT Enterprise product earlier this year. It goes on to suggest it wants to explore ways to deepen its use of AI, while expressing caution over the reliability of automated outputs and potential risks to reader trust.

“This is an important agreement in a number of respects,” wrote FT Group CEO John Ridding in a statement. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI.” 

He went on, “Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation — all essential for us. At the same time, it’s clearly in the interests of users that these products contain reliable sources.”

Large language models (LLMs) such as OpenAI’s GPT, which powers the ChatGPT chatbot, are notorious for their capacity to fabricate information or “hallucinate.” This is the polar opposite of journalism, where reporters work to verify that the information they provide is as accurate as possible.

So it’s actually not surprising that OpenAI’s early moves toward licensing content for model training have centered on journalism. The AI giant may hope this will help it fix the “hallucination” problem. (A line in the PR suggests the partnership will “help improve [OpenAI’s] models’ usefulness by learning from FT journalism.”)

There’s another major motivating factor in play here too, though: Legal liability around copyright.

Last December the New York Times announced it’s suing OpenAI, alleging that its copyrighted content was used by the AI giant to train models without a license. OpenAI disputes that, but one way to close down the risk of further lawsuits from news publishers, whose content was likely scraped off the public Internet (or otherwise harvested) to feed the development of LLMs, is to pay publishers for using their copyrighted content.

For their part, publishers stand to gain some cold hard cash from the content licensing. OpenAI told TechCrunch it has “around a dozen” publisher deals signed (or “imminent”), adding “many” more are in the works.

They could also, potentially, acquire some readers — such as if users of ChatGPT opt to click on citations that link to their content. However, generative AI could also cannibalize the use of search engines over time, diverting traffic away from news publishers’ sites. If that kind of disruption is coming down the pipe, some news publishers may feel a strategic advantage in developing closer relationships with the likes of OpenAI.

Getting involved with Big AI carries some reputational pitfalls for publishers, too.

Tech publisher CNET, which last year rushed to adopt generative AI as a content production tool — without making its use of the tech abundantly clear to readers — took further knocks to its reputation when journalists at Futurism found scores of errors in machine-written articles it had published.

The FT has a well-established reputation for producing quality journalism. So it will certainly be interesting to see how it further integrates generative AI into its products and/or newsroom processes.

Last month it also announced a GenAI tool for subscribers — which essentially shakes out to offering a natural language search option atop two decades of FT content (so, basically, it’s a value-add aimed at driving subscriptions for human-produced journalism). Additionally, in Europe, legal uncertainty over a raft of privacy law concerns is clouding the use of tools like ChatGPT.



Creators of Sora-powered short explain AI-generated video's strengths and limitations | TechCrunch


OpenAI’s video generation tool Sora took the AI community by surprise in February with fluid, realistic video that seems miles ahead of competitors. But the carefully stage-managed debut left out a lot of details — details that have been filled in by a filmmaker given early access to create a short using Sora.

Shy Kids is a digital production team based in Toronto that was picked by OpenAI as one of a few to produce short films essentially for OpenAI’s promotional purposes, though they were given considerable creative freedom in creating “air head.” In an interview with visual effects news outlet fxguide, post-production artist Patrick Cederberg described “actually using Sora” as part of his work.

Perhaps the most important takeaway for most is simply this: While OpenAI’s post highlighting the shorts lets the reader assume they more or less emerged fully formed from Sora, the reality is that these were professional productions, complete with robust storyboarding, editing, color correction, and post work like rotoscoping and VFX. Just as Apple says “shot on iPhone” but doesn’t show the studio setup, professional lighting, and color work after the fact, the Sora post only talks about what it lets people do, not how they actually did it.

Cederberg’s interview is interesting and quite non-technical, so if you’re interested at all, head over to fxguide and read it. But here are some interesting nuggets about using Sora that tell us that, as impressive as it is, the model is perhaps less of a giant leap forward than we thought.

Control is still the thing that is the most desirable and also the most elusive at this point. … The closest we could get was just being hyper-descriptive in our prompts. Explaining wardrobe for characters, as well as the type of balloon, was our way around consistency because shot to shot / generation to generation, there isn’t the feature set in place yet for full control over consistency.

In other words, matters that are simple in traditional filmmaking, like choosing the color of a character’s clothing, take elaborate workarounds and checks in a generative system, because each shot is created independent of the others. That could obviously change, but it is certainly much more laborious at the moment.

Sora outputs had to be watched for unwanted elements as well: Cederberg described how the model would routinely generate a face on the balloon that the main character has for a head, or a string hanging down the front. These had to be removed in post, another time-consuming process, if they couldn’t get the prompt to exclude them.

Precise timing and movements of characters or the camera aren’t really possible: “There’s a little bit of temporal control about where these different actions happen in the actual generation, but it’s not precise … it’s kind of a shot in the dark,” said Cederberg.

For example, timing a gesture like a wave is a very approximate, suggestion-driven process, unlike manual animations. And a shot like a pan upward on the character’s body may or may not reflect what the filmmaker wants — so the team in this case rendered a shot composed in portrait orientation and did a crop pan in post. The generated clips were also often in slow motion for no particular reason.

Example of a shot as it came out of Sora and how it ended up in the short. Image Credits: Shy Kids

In fact, using the everyday language of filmmaking, like “panning right” or “tracking shot,” was inconsistent in general, Cederberg said, which the team found pretty surprising.

“The researchers, before they approached artists to play with the tool, hadn’t really been thinking like filmmakers,” he said.

As a result, the team did hundreds of generations, each 10 to 20 seconds, and ended up using only a handful. Cederberg estimated the ratio at 300:1 — but of course we would probably all be surprised at the ratio on an ordinary shoot.

The team actually did a little behind-the-scenes video explaining some of the issues they ran into, if you’re curious. Like a lot of AI-adjacent content, the comments are pretty critical of the whole endeavor — though not quite as vituperative as the AI-assisted ad we saw pilloried recently.

The last interesting wrinkle pertains to copyright: If you ask Sora to give you a “Star Wars” clip, it will refuse. And if you try to get around it with “robed man with a laser sword on a retro-futuristic spaceship,” it will also refuse, as by some mechanism it recognizes what you’re trying to do. It also refused to do an “Aronofsky type shot” or a “Hitchcock zoom.”

On one hand, it makes perfect sense. But it does prompt the question: If Sora knows what these are, does that mean the model was trained on that content, the better to recognize that it is infringing? OpenAI, which keeps its training data close to the vest — to the point of absurdity, as with CTO Mira Murati’s interview with Joanna Stern — will almost certainly never tell us.

As for Sora and its use in filmmaking, it’s clearly a powerful and useful tool in its place, but its place is not “creating films out of whole cloth.” Yet. As another villain once famously said, “that comes later.”





OpenAI Startup Fund quietly raises $15M | TechCrunch


The OpenAI Startup Fund, a venture fund related to — but technically separate from — OpenAI that invests in early-stage, typically AI-related companies across education, law and the sciences, has quietly closed a $15 million tranche.

According to a filing with the U.S. Securities and Exchange Commission, two unnamed investors contributed the $15 million in new cash on or around April 19. The paperwork was submitted on April 25, and mentions Ian Hathaway, the OpenAI Startup Fund’s manager and sole partner.

The capital was transferred to a legal entity called a special purpose vehicle, or SPV, associated with the OpenAI Startup Fund: OpenAI Startup Fund SPV II, L.P.

SPVs allow multiple investors to pool their resources and make an investment in a single company or fund. In the VC sector, they’re sometimes used to invest in startups that don’t fit a fund’s strategy or that fall outside a fund’s terms. SPVs can also be marketed to a wider range of non-institutional investors.

It’s the second time the OpenAI Startup Fund has raised capital through an SPV — the first being a $10 million tranche in February.

The OpenAI Startup Fund, whose portfolio companies include legal tech startup Harvey, Ambiance Healthcare and humanoid robotics firm Figure AI, came under scrutiny last year after it was revealed that OpenAI CEO Sam Altman had long legally controlled the fund. While marketed like a standard corporate venture arm, Altman raised capital for the OpenAI Startup Fund from outside limited partners, including Microsoft (a close OpenAI partner and investor), and had the final say in the fund’s investments.

Neither OpenAI nor Altman had — or have — a financial interest in the OpenAI Startup Fund. But critics nonetheless argued that Altman’s ownership amounted to a conflict of interest; OpenAI claimed that the general partner structure was intended to be “temporary.”

In April, Altman transferred formal control of the OpenAI Startup Fund to Hathaway, previously an investor with the VC firm Haystack, who’d played a key role in managing the Startup Fund since 2021.

As of last year, the OpenAI Startup Fund — whose ventures also include an incubator program called Converge — had $175 million in commitments and held $325 million in gross net asset value. It’s backed well over a dozen startups including Descript, a collaborative multimedia editing platform valued at $553 million last year; language learning app Speak; AI-powered note-taking app Mem; and IDE platform Anysphere.

OpenAI hadn’t responded to TechCrunch’s request for comment as of publication time. We’ll update this post if we hear back.



xAI, Elon Musk’s OpenAI rival, is closing on $6B in funding and X, his social network, is already one of its shareholders | TechCrunch


xAI, Elon Musk’s 10-month-old competitor to the AI phenom OpenAI, is raising $6 billion on a pre-money valuation of $18 billion, according to one trusted source close to the deal. The deal – which would give investors one quarter of the company –  is expected to close in the next few weeks unless the terms of the deal change.

The deal terms have changed once already. As of last weekend, Jared Birchall, who heads Musk’s family office, was telling prospective investors that xAI was raising $3 billion at a $15 billion pre-money valuation. Given the number of investors clamoring to get into the deal, those numbers were quickly adjusted. 

Says our source, “We all received an email that basically said, ‘It’s now $6B on $18B, and don’t complain because a lot of other people want in.’”

Investors who’ve been lobbying to get into the deal for months hardly minded. Sequoia Capital and Future Ventures, the venture fund co-founded by Musk’s longtime friend Steve Jurvetson, are participating in the round.

Other participants are likely to include Valor Equity Partners and Gigafund, whose founders are also part of the inner circle of Musk, who famously blends the personal and the professional. (Outreach to these investors went unreturned; xAI does not have a press function.)

Jurvetson sits on the board of SpaceX and was a director at Tesla until 2020. Gigafund co-founder Luke Nosek, who previously co-founded Founders Fund with investor Peter Thiel, was the first venture investor to write a check to SpaceX and has sat on its board since. Valor founder Antonio Gracias was among the earliest investors in Tesla; like Jurvetson, he’s a former Tesla director and is also on the board of SpaceX.

Our source said it’s not entirely clear to every other investor who is in the deal because of the way the commitments were garnered. “It’s a Zoom call and it’s just you and Elon and Jared [on the other side] at a table with some engineers.”

The pitch, says this individual, is captivating.

xAI’s marketing literature already makes clear that the outfit’s ambition is to connect the digital and physical worlds, but it may not be widely understood that Musk plans to do this by pulling in data from each of his companies, which include Tesla, SpaceX, his tunneling outfit Boring Company, and Neuralink, which develops computer interfaces that can be implanted in human brains.

Of course, another of Musk’s companies is X. The social media platform has already incorporated xAI’s months-old chatbot, Grok, into the platform as a paid add-on.

It’s just one piece of what Musk tells investors will become a sprawling virtuous cycle. With Grok, for example, X is both a customer of the chatbot and a massive distribution channel for it. Eventually (goes the pitch), Grok will be fed data from Musk’s other companies, helping it to master the physical world in potentially endless ways, starting with truly self-driving cars.

Another likely beneficiary would be Tesla’s humanoid robot, Optimus. Today the Tesla robot is still in the lab, but Musk told analysts on a call earlier this week that Optimus will be able to perform tasks in Tesla’s factories by the end of this year. Even if that timeline proves ambitious, these slick assistants may be able to do more — and faster than previously imagined — if Musk’s overarching vision plays out.

In the meantime, the most immediate beneficiary of xAI’s burgeoning momentum may be X itself. Though the platform has become something of a toxic cesspool in the 1.5 years since Musk bought it, and has lost much of its value in that time, Musk had already seen to it that X owns a stake in xAI, so it will benefit from whatever upside the AI outfit sees.

What it all means for OpenAI — which became the fastest growing startup in history last year —  is an open question. Musk has had OpenAI in his crosshairs since the outfit’s surge began, following the release of its ChatGPT chatbot.

Musk cofounded OpenAI in 2015 and left its board in 2018 over disagreements about the direction of the outfit, which began life as a nonprofit and later evolved into a for-profit entity. Musk has since publicly harangued OpenAI cofounder Sam Altman and poked fun at the brand, proposing that it instead call itself ClosedAI.

Last month, when Musk open sourced the architecture of xAI’s earliest chatbot “Grok-1,” meaning that anyone can now download and alter it, the move was another part of his ongoing campaign to distinguish his efforts from OpenAI, which has not shared its secret sauce with the world, and which Musk is now suing.





Stainless is helping OpenAI, Anthropic and others build SDKs for their APIs | TechCrunch


Besides a focus on generative AI, what do AI startups like OpenAI, Anthropic and Together AI share in common? They use Stainless, a platform created by ex-Stripe staffer Alex Rattray, to generate SDKs for their APIs.

Rattray, who studied economics at the University of Pennsylvania, has been building things for as long as he can remember, from an underground newspaper in high school to a bike-share program in college. Rattray picked up programming on the side while at UPenn, which led to a job at Stripe as an engineer on the developer platform team.

At Stripe, Rattray helped to revamp API documentation and launch the system that powers Stripe’s API client SDK. It was while working on those projects that Rattray observed there wasn’t an easy way for companies, including Stripe, to build SDKs for their APIs at scale.

“Handwriting the SDKs couldn’t scale,” he told TechCrunch. “Today, every API designer has to settle a million and one ‘bikeshed’ questions all over again, and painstakingly enforce consistency around these decisions across their API.”

Now, you might be wondering, why would a company need an SDK if it offers an API? APIs are simply protocols, enabling software components to communicate with each other and transfer data. SDKs, on the other hand, offer a set of software-crafting tools that plug into APIs. Without an SDK to accompany an API, API users are forced to read API docs and build everything themselves, which isn’t the best experience.
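To make that contrast concrete, here is a minimal sketch of the same request made two ways: once against the raw HTTP API, and once through a generated SDK. It uses OpenAI’s chat completions API as the example, since (as noted later in this piece) OpenAI’s Python SDK is one that Stainless generates; the model name and parameters are purely illustrative.

```python
import os

import requests
from openai import OpenAI  # the Stainless-generated Python SDK mentioned below

API_KEY = os.environ["OPENAI_API_KEY"]

# Without an SDK: hand-build the URL, headers, payload shape and error handling
# from the API docs, and re-check all of it whenever the API changes.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",  # illustrative model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

# With the generated SDK: auth handling and typed methods come for free.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion.choices[0].message.content)
```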

Rattray’s solution is Stainless, which takes in an API spec and generates SDKs in a range of programming languages including Python, TypeScript, Kotlin, Go and Java. As APIs evolve and change, Stainless’ platform pushes those updates with options for versioning and publishing changelogs.

“API companies today have a team of several people building libraries in each new language to connect to their API,” Rattray said. “These libraries inevitably become inconsistent, fall out of date and require constant changes from specialist engineers. Stainless fixes that problem by generating them via code.”
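To give a rough sense of what “generating them via code” means, here is a toy sketch of spec-driven SDK generation: walk an OpenAPI-style description of an API and emit one client method per operation. This illustrates the general technique only; it is not Stainless’s engine, and the spec shown is a made-up, trimmed-down example.

```python
# Toy spec-to-SDK generator: not Stainless's engine, just the general idea.
# A hypothetical, trimmed-down OpenAPI-style spec for a fictional API.
SPEC = {
    "paths": {
        "/v1/items": {
            "get": {"operationId": "list_items"},
            "post": {"operationId": "create_item"},
        },
        "/v1/items/{item_id}": {
            "get": {"operationId": "get_item"},
        },
    }
}


def generate_client(spec: dict) -> str:
    """Emit Python source for a client class with one method per spec operation."""
    out = [
        "import requests",
        "",
        "class Client:",
        "    def __init__(self, base_url, api_key):",
        "        self.base_url = base_url",
        "        self.session = requests.Session()",
        "        self.session.headers['Authorization'] = f'Bearer {api_key}'",
    ]
    for path, operations in spec["paths"].items():
        for method, op in operations.items():
            out += [
                "",
                f"    def {op['operationId']}(self, **params):",
                f"        url = self.base_url + {path!r}.format(**params)",
                f"        resp = self.session.request({method!r}, url, json=params or None)",
                "        resp.raise_for_status()",
                "        return resp.json()",
            ]
    return "\n".join(out)


if __name__ == "__main__":
    # Regenerating the client whenever the spec changes keeps the SDK in sync with the API.
    print(generate_client(SPEC))
```

Stainless layers far more on top of this idea (multiple target languages, plus versioning and changelogs as the spec evolves), but the core mechanic is the same: the spec is the source of truth and the client code is derived from it.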

Stainless isn’t the only API-to-SDK generator out there. There’s LibLab and Speakeasy, to name a couple, plus longstanding open source projects such as the OpenAPI Generator.

Stainless, however, delivers more “polish” than most others, Rattray said, thanks partly to its use of generative AI.

“Stainless uses generative AI to produce an initial ‘Stainless config’ for customers, which is then up to them to fine-tune to their API,” he explained. “This is particularly valuable for AI companies, whose huge user bases include many novice developers trying to integrate with complex features like chat streaming and tools.”

Perhaps that’s what attracted customers like OpenAI, Anthropic and Together AI, along with Lithic, LangChain, Orb, Modern Treasury and Cloudflare. Stainless has “dozens” of paying clients in its beta, Rattray said, and some of the SDKs it’s generated, including OpenAI’s Python SDK, are getting millions of downloads per week.

“If your company wants to be a platform, your API is the bedrock of that,” he said. “Great SDKs for your API drive faster integration, broader feature adoption, quicker upgrades and trust in your engineering quality.”

Most customers are paying for Stainless’ enterprise tier, which comes with additional white-glove services and AI-specific functionality. Publishing a single SDK with Stainless is free. But companies have to fork over between $250 per month and $30,000 per year for multiple SDKs across multiple programming languages.

Rattray bootstrapped Stainless “with revenue from day one,” he said, adding that the company could be profitable as soon as this year; annual recurring revenue is hovering around $1 million. But Rattray opted instead to take on outside investment to build new product lines.

Stainless recently closed a $3.5 million seed round with participation from Sequoia and The General Partnership.

“Across the tech ecosystem, Stainless stands out as a beacon that elevates the developer experience, rivaling the high standard once set by Stripe,” said Anthony Kline, partner at The General Partnership. “As APIs continue to be the core building blocks of integrating services like LLMs into applications, Alex’s first-hand experience pioneering Stripe’s API codegen system uniquely positions him to craft Stainless into the quintessential platform for seamless, high-quality API interactions.”

Stainless has a 10-person team based in New York. Rattray expects headcount to grow to 15 or 20 by the end of the year.



Why code-testing startup Nova AI uses open source LLMs more than OpenAI


It is a universal truth of human nature that the developers who build the code should not be the ones to test it. First of all, most of them pretty much detest that task. Second, like any good auditing protocol, those who do the work should not be the ones who verify it.

Not surprisingly, then, code testing in all its forms – usability, language- or task-specific tests, end-to-end testing – has been a focus of a growing cadre of generative AI startups. Every week, TechCrunch covers another one, like Antithesis (raised $47 million), CodiumAI (raised $11 million) and QA Wolf (raised $20 million). And new ones are emerging all the time, like new Y Combinator graduate Momentic.

Another is year-old startup Nova AI, an Unusual Academy accelerator grad that’s raised a $1 million pre-seed round. It is attempting to best its competitors with its end-to-end testing tools by breaking many of the Silicon Valley rules of how startups should operate, founder and CEO Zach Smith tells TechCrunch.

Whereas the standard Y Combinator approach is to start small, Nova AI is aiming at mid-size to large enterprises with complex code-bases and a burning need now. Smith declined to name any customers using or testing its product except to describe them as mostly late-stage (series C or beyond) venture-backed startups in ecommerce, fintech or consumer products, and “heavy user experiences. Downtime for these features is costly.”

Nova AI’s tech sifts through its customers’ code to automatically build tests using GenAI. It is particularly geared toward continuous integration and continuous delivery/deployment (CI/CD) environments where engineers are constantly shipping bits and pieces into their production code.

The idea for Nova AI came from the experiences Smith and his cofounder Jeffrey Shih had when they were engineers working for big tech companies. Smith is a former Googler who worked on cloud-related teams that helped customers use a lot of automation technology. Shih had previously worked at Meta (also at Unity and Microsoft before that) with a rare AI speciality involving synthetic data. They’ve since added a third cofounder, AI data scientist Henry Li.

Another rule Nova AI is not following: while boatloads of AI startups are building on top of OpenAI’s industry-leading GPT models, Nova AI is using OpenAI’s GPT-4 as little as possible, only to help it generate some code and to do some labeling tasks. No customer data is being fed to OpenAI.

While OpenAI promises that the data of those on a paid business plan is not being used to train its models, enterprises still do not trust OpenAI, Smith tells us. “When we’re talking to large enterprises, they’re like, ‘We don’t want our data going into OpenAI,’” Smith said.

The engineering teams of large companies are not the only ones that feel this way. OpenAI is fending off a number of lawsuits from those who don’t want it to use their work for model training, or believe their work wound up, unauthorized and unpaid for, in its outputs.

Nova AI is instead heavily relying on open source models like Llama, developed by Meta, and StarCoder (from the BigCode community, a collaboration between ServiceNow and Hugging Face), as well as building its own models. They aren’t yet using Google’s Gemma with customers, but have tested it and “seen good results,” Smith says.

For instance, he explains that a common use for OpenAI’s GPT-4 is to “produce vector embeddings” on data so LLMs can use the vectors for semantic search. Vector embeddings translate chunks of text into numbers so the LLM can perform various operations, such as clustering them with other chunks of similar text. Nova AI needs embeddings like these for its customers’ source code, but goes to lengths not to send any of that data to OpenAI.

“In this case, instead of using OpenAI’s embedding models, we deploy our own open-source embedding models so that when we need to run through every file, we aren’t just sending it to OpenAI,” Smith explained.
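As a rough illustration of that pattern, here is a minimal sketch of computing embeddings locally with a generic open-source model via the sentence-transformers library and running a semantic search over code chunks. The library and model named here are assumptions for the example, not Nova AI’s actual stack.

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumption: any locally hosted open-source embedding model would do here;
# "all-MiniLM-L6-v2" is just a small, commonly used example.
model = SentenceTransformer("all-MiniLM-L6-v2")

code_chunks = [
    "def add(a, b):\n    return a + b",
    "def subtract(a, b):\n    return a - b",
    "class UserRepository:\n    def find_by_email(self, email): ...",
]

# Embeddings are computed locally, so the source code never leaves the machine.
chunk_vectors = model.encode(code_chunks, normalize_embeddings=True)
query_vector = model.encode(["function that sums two numbers"], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity reduces to a dot product.
scores = chunk_vectors @ query_vector
best = int(np.argmax(scores))
print(f"best match (score {scores[best]:.2f}):\n{code_chunks[best]}")
```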

While not sending customer data to OpenAI appeases nervous enterprises, open source AI models are also cheaper and more than sufficient for doing targeted specific tasks, Smith has found. In this case, they work well for writing tests.

“The open LLM industry is really proving that they can beat GPT 4 and these big domain providers, when you go really narrow,” he said. “We don’t have to provide some massive model that can tell you what your grandma wants for her birthday. Right? We need to write a test. And that’s it. So our models are fine-tuned specifically for that.”

Open source models are also progressing quickly. For instance, Meta recently introduced a new version of Llama that’s earning accolades in technology circles and that may convince more AI startups to look at OpenAI alternatives.



Humane’s Ai Pin considers life beyond the smartphone | TechCrunch


Nothing lasts forever. Nowhere is the truism more apt than in consumer tech. This is a land inhabited by the eternally restless — always on the make for the next big thing. The smartphone has, by all accounts, had a good run. Seventeen years after the iPhone made its public debut, the devices continue to reign. Over the last several years, however, the cracks have begun to show.

The market plateaued, as sales slowed and ultimately contracted. Last year was punctuated by stories citing the worst demand in a decade, leaving an entire industry asking the same simple question: what’s next? If there was an easy answer, a lot more people would currently be a whole lot richer.

Smartwatches have had a moment, though these devices are largely regarded as accessories, augmenting the smartphone experience. As for AR/VR, the best you can really currently say is that — after a glacial start — the jury is still very much out on products like the Meta Quest and Apple Vision Pro.

When it began to tease its existence through short, mysterious videos in the summer of 2022, Humane promised a glimpse of the future. The company promised an approach every bit as human-centered as its name implied. It was, at the very least, well-funded, to the tune of $100 million+ (now $230 million), and featured an AI element.

The company’s first product, the Humane Ai Pin, arrives this week. It suggests a world where being plugged in doesn’t require having one’s eyes glued to a screen in every waking moment. It’s largely — but not wholly — hands-free. A tap to the front touch panel wakes up the system. Then it listens — and learns.

Beyond the smartphone

Image Credits: Darrell Etherington/TechCrunch

Humane couldn’t ask for better timing. While the startup has been operating largely in stealth for the past seven years, its market debut comes as the trough of smartphone excitement intersects with the crest of generative AI hype. The company’s bona fides contributed greatly to pre-launch excitement. Founders Bethany Bongiorno and Imran Chaudhri were previously well-placed at Apple. OpenAI’s Sam Altman, meanwhile, was an early and enthusiastic backer.

Excitement around smart assistants like Siri, Alexa and Google Home began to ebb in the last few years, but generative AI platforms like OpenAI’s ChatGPT and Google’s Gemini have flooded that vacuum. The world is enraptured with plugging a few prompts into a text field and watching as the black box spits out a shiny new image, song or video. It’s novel enough to feel like magic, and consumers are eager to see what role it will play in our daily lives.

That’s the Ai Pin’s promise. It’s a portal to ChatGPT and its ilk from the comfort of our lapels, and it does this with a meticulous attention to hardware design befitting its founders’ origins.

Press coverage around the startup has centered on the story of two Apple executives having grown weary of the company’s direction — or lack thereof. Sure, post-Steve Jobs Apple has had successes in the form of the Apple Watch and AirPods, but while Tim Cook is well equipped to create wealth, he’s never been painted as a generational creative genius like his predecessor.

If the world needs the next smartphone, perhaps it also needs the next Apple to deliver it. It’s a concept Humane’s founders are happy to play into. The story of the company’s founding, after all, originates inside the $2.6 trillion behemoth.

Start spreading the news

Image Credits: Alexander Spatari / Getty Images

In late March, TechCrunch paid a visit to Humane’s New York office. The feeling was tangibly different than our trip to the company’s San Francisco headquarters in the waning months of 2023. The earlier event buzzed with the manic energy of an Apple Store. It was controlled and curated, beginning with a small presentation from Bongiorno and Chaudhri, and culminating in various stations staffed by Humane employees designed to give a crash course on the product’s feature set and origins.

Things in Manhattan were markedly subdued by comparison. The celebratory buzz that accompanies product launches has dissipated into something more formal, with employees focused on dotting I’s and crossing T’s in the final push before product launch. The intervening months provided plenty of confirmation that the Ai Pin wasn’t the only game in town.

January saw the Rabbit R1’s CES launch. The startup opted for a handheld take on generative AI devices. The following month, Samsung welcomed customers to “the era of Mobile AI.”  The “era of generative AI” would have been more appropriate, as the hardware giant leveraged a Google Gemini partnership aimed at relegating its bygone smart assistant Bixby to a distant memory. Intel similarly laid claim to the “AI PC,” while in March Apple confidently labeled the MacBook Air the “world’s best consumer laptop for AI.”

At the same time, Humane’s standing in the news cycle stumbled, with reports of a small layoff round and a small delay in preorder fulfillment. Both can be written off as products of the immense difficulties around launching a first-generation hardware product — especially under the intense scrutiny few startups see.

For the second meeting with Bongiorno and Chaudhri, we gathered around a conference table. The first goal was an orientation with the device, ahead of review. I’ve increasingly turned down these sorts of meeting requests post-pandemic, but the Ai Pin represents a novel enough paradigm to justify a sit-down orientation with the device. Humane also sent me home with a 30-minute intro video designed to familiarize users — not the sort of thing most folks require when, say, upgrading a phone.

More interesting to me, however, was the prospect of sitting down with the founders for the sort of wide-ranging interview we weren’t able to do during last year’s San Francisco event. Now that most of the mystery is gone, Chaudhri and Bongiorno were more open about discussing the product — and company — in-depth.

Origin story

Humane co-founders Bethany Bongiorno and Imran Chaudhri.

One Infinite Loop is the only place one can reasonably open the Humane origin story. The startup’s founders met on Bongiorno’s first day at Apple in 2008, not long after the launch of the iPhone App Store. Chaudhri had been at the company for 13 years at that point, having joined at the depths of the company’s mid-90s struggles. Jobs would return to the company two years later, following its acquisition of NeXT.

Chaudhri’s 22 years with the company saw him working as director of Design on both the hardware and software sides of projects like the Mac and iPhone. Bongiorno worked as project manager for iOS, macOS and what would eventually become iPadOS. The pair married in 2016 and left Apple the same year.

“We began our new life,” says Bongiorno, “which involves thinking a lot about where the industry was going and what we were passionate about.” The pair started consulting work. However, Bongiorno describes a seemingly mundane encounter that would change their trajectory soon after.

Image Credits: Humane

“We had gone to this dinner, and there was a family sitting next to us,” she says. “There were three kids and a mom and dad, and they were on their phones the entire time. It really started a conversation about the incredible tool we built, but also some of the side effects.”

Bongiorno adds that she arrived home one day in 2017 to see Chaudhri pulling apart electronics. He had also typed out a one-page descriptive vision for the company that would formally be founded as Humane later the same year.

According to Bongiorno, Humane’s first hardware device never strayed too far from Chaudhri’s early mockups. “The vision is the same as what we were pitching in the early days,” she says. That’s down to Ai Pin’s most head-turning feature, a built-in projector that allows one to use the surface of their hand as a kind of makeshift display. It’s a tacit acknowledgement that, for all of the talk about the future of computing, screens are still the best method for accomplishing certain tasks.

Much of the next two years were spent exploring potential technologies and building early prototypes. In 2018, the company began discussing the concept with advisors and friends, before beginning work in earnest the following year.

Staring at the sun

In July 2022, Humane tweeted, “It’s time for change, not more of the same.” The message, which reads as much like a tagline as a mission statement, was accompanied by a minute-long video. It opens in dramatic fashion on a rendering of an eclipse. A choir sings in a bombastic — almost operatic — fashion, as the camera pans down to a crowd. As the moon obscures the sunlight, their faces are illuminated by their phone screens. The message is not subtle.

The crowd opens to reveal a young woman in a tank top. Her head lifts up. She is now staring directly into the eclipse (not advised). There are lyrics now, “If I had everything, I could change anything,” as she pushes forward to the source of the light. She holds her hand to the sky. A green light illuminates her palm in the shape of the eclipse. This last bit is, we’ll soon discover, a reference to the Ai Pin’s projector. The marketing team behind the video is keenly aware that, while it’s something of a secondary feature, it’s the most likely to grab public attention.

As a symbol, the eclipse has become deeply ingrained in the company’s identity. The green eclipse on the woman’s hand is also Humane’s logo. It’s built into the Ai Pin’s design language, as well. A metal version serves as the connection point between the pin and its battery packs.

Image Credits: Brian Heater

The company is so invested in the motif that it held an event on October 14, 2023, to coincide with a solar eclipse. The device comes in three colors: Eclipse, Equinox and Lunar, and it’s almost certainly no coincidence that this current big news push is happening a mere days after another North American solar eclipse.

However, it was on the runway of a Paris fashion show in September that the Ai Pin truly broke cover. The world got its first good look at the product as it was magnetically secured to the lapels of models’ suit jackets. It was a statement, to be sure. Though its founders had left Apple a half-dozen years prior, they were still very much invested in industrial design, creating a product designed to be a fashion accessory (your mileage will vary).

The design had evolved somewhat since conception. For one thing, the top of the device, which houses the sensors and projector, is now angled downward, so the Pin’s vantage point is roughly the same as its wearer’s. An earlier version with a flatter surface would unintentionally angle the pin upward when worn on certain chest types. Nailing down a more universal design required a lot of trial and error with a lot of different people of different shapes and sizes.

“There’s an aspect of this particular hardware design that has to be compassionate to who’s using it,” says Chaudhri. “It’s very different when you have a handheld aspect. It feels more like an instrument or a tool […] But when you start to have a more embodied experience, the design of the device has to be really understanding of who’s wearing it. That’s where the compassion comes from.”

Year of the Rabbit?

Image Credits: rabbit

Then came competition. When it was unveiled at CES on January 9, the Rabbit R1 stole the show.

“The phone is an entertainment device, but if you’re trying to get something done it’s not the highest efficiency machine,” CEO and founder Jesse Lyu noted at the time. “To arrange dinner with a colleague we needed four-five different apps to work together. Large language models are a universal solution for natural language, we want a universal solution for these services — they should just be able to understand you.”

While the R1’s product design is novel in its own right, it’s arguably a more traditional piece of consumer electronics than Ai Pin. It’s handheld and has buttons and a screen. At its heart, however, the functionality is similar. Both are designed to supplement smartphone usage and are built around a core of LLM-trained AI.

The device’s price point also contributed to its initial buzz. At $200, it’s a fraction of the Ai Pin’s $699 starting price. The more familiar form factor also likely comes with a smaller learning curve than Humane’s product.

Asked about the device, Bongiorno makes the case that another competitor only validates the space. “I think it’s exciting that we kind of sparked this new interest in hardware,” she says. “I think it’s awesome. Fellow builders. More of that, please.”

She adds, however, that the excitement wasn’t necessarily there at Humane from the outset. “We talked about it internally at the company. Of course people were nervous. They were like, ‘what does this mean?’ Imran and I got in front of the company and said, ‘guys, if there weren’t people who followed us, that means we’re not doing the right thing. Then something’s wrong.’”

Bongiorno further suggests that Rabbit is focused on a different use case, as its product requires focus similar to that of a smartphone — though both Bongiorno and Chaudhri have yet to use the R1.

A day after Rabbit unveiled the product, Humane confirmed that it had laid off 10 employees — amounting to 4% of its workforce. It’s a small fraction of a company with a small headcount, but the timing wasn’t great, a few months ahead of the product’s official launch. The news also found its long-time CTO, Patrick Gates, exiting the C-suite role for an advisory job.

“The honest truth is we’re a company that is constantly going through evolution,” Bongiorno says of the layoffs. “If you think about where we were five years ago, we were in R&D. Now we are a company that’s about to ship to customers, that’s about to have to operate in a different way. Like every growing and evolving company, changes are going to happen. It’s actually really healthy and important to go through that process.”

The following month, the company announced that its pins would now be shipping in mid-April. It was a slight delay from the original March ship date, though Chaudhri offers something of a Bill Clinton-style “it depends on what your definition of ‘is’ is” answer. The company, he suggests, defines “shipping” as leaving the factory, rather than the more industry-standard definition of shipping to customers.

“We said we were shipping in March and we are shipping in March,” he says. “The devices leave the factory. The rest is on the U.S. government and how long they take when they hold things in place — tariffs and regulations and other stuff.”

Money moves

Image Credits: Brian Heater

No one invests $230 million in a startup out of the goodness of their heart. Sooner or later, backers will be looking for a return. Integral to Humane’s path to positive cashflow is a subscription service that’s required to use the thing. The $699 price tag comes with 90 days free, then after that, you’re on the hook for $24 a month.

That fee brings talk, text and data from T-Mobile, cloud storage and — most critically — access to the Ai Bus, which is foundational to the device’s operation. Humane describes it thusly, “An entirely new AI software framework, the Ai Bus, brings Ai Pin to life and removes the need to download, manage, or launch apps. Instead, it quickly understands what you need, connecting you to the right AI experience or service instantly.”

Investors, of course, love to hear about subscriptions. Hell, even Apple relies on service revenue for growth as hardware sales have slowed.

Bongiorno alludes to internal projections for revenue, but won’t go into specifics for the timeline. She adds that the company has also discussed an eventual path to IPO even at this early stage in the process.

“If we weren’t, that would not be responsible for any company,” she says. “These are things that we care deeply about. Our vision for Humane from the beginning was that we wanted to build a company where we could build a lot of things. This is our first product, and we have a large roadmap that Imran is really passionate about of where we want to go.”

Chaudhri adds that the company “graduated beyond sketches” for those early products. “We’ve got some early photos of things that we’re thinking about, some concept pieces and some stuff that’s a lot more refined than those sketches when it was a one-man team. We are pretty passionate about the AI space and what it actually means to productize AI.”





Watch: Meta's new Llama 3 models give open-source AI a boost


New AI models from Meta are making waves in technology circles. The two new models, part of the Facebook parent company’s Llama line of artificial intelligence tools, are both open-source, helping them stand apart from competing offerings from OpenAI and other well-known names.

Meta’s new Llama models come in two sizes, with the Llama 3 8B model featuring eight billion parameters and the Llama 3 70B model some seventy billion. Generally speaking, the more parameters, the more capable the model, but not every AI task needs the largest possible model.

The company’s new models, which were trained on 24,000-GPU clusters, perform well across the benchmarks Meta put them up against, besting some rivals’ models that were already on the market. For those of us not competing to build and release the most capable or largest AI models, what matters is that they are still getting better with time. And work. And a lot of compute.

While Meta takes an open-source approach to its AI work, its competitors often prefer more closed-source work. OpenAI, despite its name and history, offers access to its models, but not their source code. There’s a healthy debate in the world of AI regarding which approach is better, both for speed of development and for safety. After all, some technologists — and some computing doomers, to be clear — are worried that AI tech is developing too fast and could prove dangerous to democracies and more.

For now, Meta is keeping the AI fires alight, offering a new challenge to its peers and rivals to best their latest. Hit play, and let’s talk about it!



ChatGPT is a squeeze away with Nothing’s upgraded earbuds | TechCrunch


Nothing today announced a pair of refreshes to its earbud line. The naming conventions are a touch convoluted here, but the Nothing Ear is an update to the Nothing Ear (2), while the Nothing Ear (a) is more of a spiritual successor to the Nothing Ear Stick.

The most notable bit of today’s news, however, is probably Nothing’s embrace of ChatGPT this time out. As the “AI smartphones” of the world are battling with devices like Humane’s Ai Pin and the Rabbit R1 for mindshare, the London-based mobile company seems to have skipped a step by integrating the tech into their new earbuds.

More specifically, if you have the ChatGPT app installed on a connected Nothing handset, you’ll be able to ask the generative AI program questions with a pinch of the headphones’ stem. Think Siri/Google Assistant/Alexa-style access on a pair of earbuds, only this one taps directly into OpenAI’s wildly popular platform.

“Nothing will also improve the Nothing smartphone user experience in Nothing OS by embedding system-level entry points to ChatGPT, including screenshot sharing and Nothing-styled widgets,” the company notes. The feature will be available exclusively for both of the new earbuds.

Preorders for both buds start today. Nothing says the Ear buds bring improved sound over their predecessors, courtesy of a new driver system. That, meanwhile, has allowed for more space for the battery, with up to 25% more life than the Ear (2). A “smart” active noise-canceling system adapts accordingly to environmental noise and checks for “leakage” between the buds and the ear canal.

The Ear (a) also brings noise-canceling improvements, but what Nothing really seems to be pushing is the bright yellow colorway, which bucks the black and white aesthetic that has defined its devices.

The Ear and Ear (a) are both reasonably priced at $149 and $99, respectively. Shipping starts April 22.

