From Digital Age to Nano Age. WorldWide.


Robotic Automations

Buymeacoffee's founder has built an AI-powered voice note app | TechCrunch


AI-powered tools like OpenAI’s Whisper have enabled many apps to make transcription an integral part of their feature set for personal note-taking, and the space has quickly flourished as a result. Apps like AudioPen, Cleft Notes, and TalkNotes have proliferated across app stores and the Internet, but most offer a pretty limited feature set: They […]

© 2024 TechCrunch. All rights reserved. For personal use only.


Software Development in Sri Lanka


How PayJoy built $300M in revenue by letting the underserved use their smartphones as collateral for loans | TechCrunch


Lerato Motloung is a mother of two who works in a supermarket in Johannesburg, South Africa. After her phone was stolen, Motloung had to go without a mobile phone for nine months because she could not afford a new one. Then, in February 2024, she saw a sign about PayJoy, a startup that offers lending to the underserved in emerging markets. She was soon able to buy her first smartphone.

Motloung is one of millions of customers that San Francisco–based PayJoy has helped since its 2015 inception. (She was its 10 millionth customer.) The company’s mission is to “provide a fair and responsible entry point for individuals in emerging markets to enter the modern financial system, build credit, achieve economic freedom, and access digital connectivity.”

Image Credits: PayJoy

PayJoy became a public benefit corporation last year and is an example of a company attempting to do good while also generating meaningful revenue and running a profitable business. And, unlike other startups offering loans to the underserved, it’s doing so in a way that’s not predatory, it says.

“We meet customers where they are — even with no bank account or formal credit history, we create access to financial services and carve a path into the financial system,” said co-founder and CEO Doug Ricket.

PayJoy is applying a buy now, pay-as-you-go model to the estimated 3 billion adults globally who don’t have credit by allowing them to purchase smartphones and pay weekly for a 3- to 12-month period. The phones themselves are used as collateral for the loan.

While the loans are interest free, with no late or hidden fees, the company does mark up the price it charges for the phones by a “multiple,” Ricket said. But it shares the full price upfront before customers sign a contract.

“Users will never pay more than the disclosed amount and can return their phone and walk away debt-free at any time,” he says.

If a customer does miss a payment, their device is locked, making it unusable except for contacting PayJoy or emergency services. To unlock it, the user makes a single weekly payment, which restores full use for seven days.

Adds Ricket: “Even upon serious delinquency, PayJoy does not repossess the device and does not communicate individual loan performance to retail partners. PayJoy does report loan performance to credit bureaus including both positive and negative history, so their credit report will be affected accordingly.”
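The lock-and-unlock policy described above can be sketched as a simple state check. This is purely an illustration of the behavior the article describes; the class and method names are invented, and PayJoy's actual implementation is not public:

```python
# Hypothetical sketch of the described lock/unlock flow (all names invented
# for illustration; this is not PayJoy's actual code).
from datetime import date, timedelta

class LockedPhone:
    """Models the policy from the article: a missed payment locks the device
    except for contacting PayJoy or emergency services, and a single weekly
    payment unlocks it for 7 days."""

    ALLOWED_WHEN_LOCKED = {"payjoy_support", "emergency_services"}

    def __init__(self):
        self.unlocked_until = None  # None means the device is locked

    def record_weekly_payment(self, paid_on: date):
        # One weekly payment restores full use for 7 days.
        self.unlocked_until = paid_on + timedelta(days=7)

    def can_use(self, feature: str, today: date) -> bool:
        if self.unlocked_until is not None and today < self.unlocked_until:
            return True  # within the paid-up window: everything works
        return feature in self.ALLOWED_WHEN_LOCKED

phone = LockedPhone()
assert phone.can_use("emergency_services", date(2024, 3, 1))  # always allowed
assert not phone.can_use("camera", date(2024, 3, 1))          # locked
phone.record_weekly_payment(date(2024, 3, 1))
assert phone.can_use("camera", date(2024, 3, 5))              # unlocked window
assert not phone.can_use("camera", date(2024, 3, 9))          # window expired
```

Notably, per Ricket, the lock is the only enforcement mechanism: the device is never repossessed, and loan performance goes to credit bureaus rather than retail partners.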

By the fourth quarter of 2023, PayJoy had achieved an annualized run rate of more than $300 million, Ricket told TechCrunch exclusively. That’s up from $10 million in 2020, when it first introduced lending. And the company was “net income profitable” in 2023. It also managed to raise significant capital during a challenging fundraising environment. Last September, PayJoy announced that it had secured $150 million in Series C equity funding and $210 million in debt financing. Warburg Pincus led its equity raise, which included participation from Invus, Citi Ventures and prior lead investors Union Square Ventures and Greylock.

PayJoy has come a long way since TechCrunch first profiled it in December 2015 when it had secured $4.3 million in equity and debt about 10 months after its inception.

Image Credits: PayJoy

Today, the company operates in seven countries across regions such as Latin America, India, Africa and most recently, the Philippines — providing over $2 billion of credit to date. In October of 2023, the company launched PayJoy Card in Mexico, providing customers who have successfully repaid their smartphone loans with a revolving line of credit. Ricket says that PayJoy can “enable cheaper credit and … reduce default rates” by using data science and machine learning to underwrite its loans to assess a customer’s creditworthiness. He says 47% of its customers are women, 40% are new to credit and 37% are first-time smartphone users.

Ricket was inspired to start PayJoy after serving in the Peace Corps following his graduation from MIT. He then spent two years as a volunteer teacher in West Africa, where he became interested in technology in the context of international development. After the Peace Corps, he landed at Google, where he helped create the world’s first complete digital map.

Ricket then moved back to West Africa where he worked for D.Light Design in the pay-as-you-go solar industry. All of that experience has been combined in PayJoy.

The company is on track to achieve over 35% revenue growth this year, with strong momentum in Brazil and new product offerings in development, according to Ricket. Presently, the company has 1,400 employees. It has raised more than $400 million in debt and equity over its lifetime.

Want more fintech news in your inbox? Sign up for TechCrunch Fintech here.



Naval Ravikant's Airchat is a social app built around talk, not text | TechCrunch


Airchat is a new social media app that encourages users to “just talk.”

A previous version of Airchat was released last year, but the team — led by AngelList founder Naval Ravikant and former Tinder product exec Brian Norgard — rebuilt the app and relaunched it on iOS and Android yesterday. Currently invite-only, Airchat is already ranked #27 in social networking on Apple’s App Store.

Visually, Airchat should feel pretty familiar and intuitive, with the ability to follow other users, scroll through a feed of posts, then reply to, like, and share those posts. The difference is that the posts and replies are audio recordings, which the app then transcribes.

When you open Airchat, messages automatically start playing, and you quickly cycle through them by swiping up and down. If you’re so inclined, you can actually pause the audio and just read text; users can also share photos and video. But audio seems to be what everyone’s focused on, and what Ravikant describes as transforming the dynamic compared to text-based social apps.

After joining Airchat this morning, most of the posts I saw were about the app itself, with Ravikant and Norgard answering questions and soliciting feedback.

“Humans are all meant to get along with other humans, it just requires the natural voice,” Ravikant said. “Online text-only media has given us this delusion that people can’t get along, but actually everybody can get along.”

This isn’t the first time tech startups have bet on voice as the next big social media thing. But Airchat’s asynchronous, threaded posts make for a pretty different experience than the live chat rooms that briefly flourished on Clubhouse and Twitter Spaces. Norgard argued that this approach removes the stage fright barrier to participating, because “you can take as many passes at composing a message on here as you like, and nobody knows.”

In fact, he said that in conversations with early users, the team found that “most of the people using AirChat today are very introverted and shy.”

Personally, I haven’t convinced myself to post anything yet. I was more interested in seeing how others were using the app — plus, I have a love-hate relationship with the sound of my voice.

Still, there’s something to be said for hearing Ravikant and Norgard explain their vision, rather than just reading the transcriptions, which can miss nuances of enthusiasm, intonation, etc. And I’m especially curious to see how deadpan jokes and shitposting translate (or don’t) into audio.

I also struggle a bit with the speed. The app defaults to 2x audio playback, which I thought sounded unnatural, particularly if the whole idea is fostering human connection. You can reset the speed by holding down the pause button, but at 1x, I noticed I’d start skimming when listening to longer posts, then I’d usually skip ahead before listening to the full audio. But maybe that’s fine.

Meanwhile, Ravikant’s belief in the power of voice to cut down on acrimony doesn’t necessarily eliminate the need for content moderation features. He said the feed is powered by “some complex rules around hiding spam and trolls and people that you or they may not want to hear from,” but as of publication he had not responded to a follow-up user question about content moderation.

Asked about monetization — i.e., when we might start seeing ads, audio or otherwise —  Ravikant said there’s “no monetization pressure on the company whatsoever.” (He described himself as “not the sole investor” but “a big investor” in the company.)

“I could care less about monetization,” he said. “We’ll run this thing on a shoestring if we have to.”





OnePlus went ahead and built its own version of Google Magic Eraser | TechCrunch


OnePlus has always marched to the beat of its own drummer — for better and worse. Take, for example, the company’s latest foray into mobile artificial intelligence, the AI Eraser. Before you ask, no, this is not simply a rebadged version of Google’s longstanding and very good Magic Eraser.

Nope, OnePlus went ahead and built its own version in a bid to show the world that it has AI ambitions of its own. It’s likely the Oppo-owned company has been working on AI Eraser for some time now — though the world has known about Google’s version since the Pixel 6 event back in October 2021 (Magic Editor, meanwhile, debuted a year ago at I/O 2023).

From the sound of its press materials, the company built the feature from the ground up, starting with its own first-party large language models.

“AI Eraser is the result of a substantial R&D investment from OnePlus,” the company notes in its press material. “The proprietary LLM behind the new feature has been trained on a vast dataset that allows it to comprehend complex scenes. Through this advanced visual understanding, AI Eraser is able to intelligently substitute unwanted objects with contextually appropriate elements that naturally elevate the photo’s appeal, empowering users with the ability to make high-quality photo edits anywhere and at any time.”

An AI-powered eraser is an undeniably handy feature, but it’s also one that Google knocked out of the park immediately. It’s probably not the best use of one’s R&D resources to go head to head on that feature — especially a feature that is currently available across iOS and Android devices via Google Photos.

More than anything, this appears to be OnePlus’s attempt to plant its flag in what has very much shaped up to be the year of the AI smartphone. Hopefully next time, it will use those resources to build something that truly differentiates itself from existing offerings.

AI Eraser is rolling out to OnePlus devices this month, starting with the OnePlus 12, OnePlus 12R, OnePlus 11, OnePlus Open and OnePlus Nord CE 4. It will not, however, be coming to the R12-D12.





OpenAI built a voice cloning tool, but you can't use it… yet | TechCrunch


As deepfakes proliferate, OpenAI is refining the tech used to clone voices — but the company insists it’s doing so responsibly.

Today marks the preview debut of OpenAI’s Voice Engine, an expansion of the company’s existing text-to-speech API. Under development for about two years, Voice Engine allows users to upload any 15-second voice sample to generate a synthetic copy of that voice. But there’s no date for public availability yet, giving the company time to respond to how the model is used and abused.

“We want to make sure that everyone feels good about how it’s being deployed — that we understand the landscape of where this tech is dangerous and we have mitigations in place for that,” Jeff Harris, a member of the product staff at OpenAI, told TechCrunch in an interview.

Training the model

The generative AI model powering Voice Engine has been hiding in plain sight for some time, Harris said.

The same model underpins the voice and “read aloud” capabilities in ChatGPT, OpenAI’s AI-powered chatbot, as well as the preset voices available in OpenAI’s text-to-speech API. And Spotify’s been using it since early September to dub podcasts for high-profile hosts like Lex Fridman in different languages.

I asked Harris where the model’s training data came from — a bit of a touchy subject. He would only say that the Voice Engine model was trained on a mix of licensed and publicly available data.

Models like the one powering Voice Engine are trained on an enormous number of examples — in this case, speech recordings — usually sourced from public sites and data sets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

OpenAI is already being sued over allegations the company violated IP law by training its AI on copyrighted content, including photos, artwork, code, articles and e-books, without providing the creators or owners credit or pay.

OpenAI has licensing agreements in place with some content providers, like Shutterstock and the news publisher Axel Springer, and allows webmasters to block its web crawler from scraping their site for training data. OpenAI also lets artists “opt out” of and remove their work from the data sets that the company uses to train its image-generating models, including its latest DALL-E 3.

But OpenAI offers no such opt-out scheme for its other products. And in a recent statement to the U.K.’s House of Lords, OpenAI suggested that it’s “impossible” to create useful AI models without copyrighted material, asserting that fair use — the legal doctrine that allows for the use of copyrighted works to make a secondary creation as long as it’s transformative — shields it where it concerns model training.

Synthesizing voice

Surprisingly, Voice Engine isn’t trained or fine-tuned on user data. That’s owing in part to the ephemeral way in which the model — a combination of a diffusion process and transformer — generates speech.

“We take a small audio sample and text and generate realistic speech that matches the original speaker,” said Harris. “The audio that’s used is dropped after the request is complete.”

As he explained it, the model is simultaneously analyzing the speech data it pulls from and the text data meant to be read aloud, generating a matching voice without having to build a custom model per speaker.

It’s not novel tech. A number of startups have delivered voice cloning products for years, from ElevenLabs to Replica Studios to Papercup to Deepdub to Respeecher. So have Big Tech incumbents such as Amazon, Google and Microsoft — the last of which, incidentally, is a major OpenAI investor.

Harris claimed that OpenAI’s approach delivers overall higher-quality speech.

We also know it will be priced aggressively. Although OpenAI removed Voice Engine’s pricing from the marketing materials it published today, in documents viewed by TechCrunch, Voice Engine is listed as costing $15 per one million characters, or ~162,500 words. That would fit Dickens’ “Oliver Twist” with a little room to spare. (An “HD” quality option costs twice that, but confusingly, an OpenAI spokesperson told TechCrunch that there’s no difference between HD and non-HD voices. Make of that what you will.)

That translates to around 18 hours of audio, making the price somewhat south of $1 per hour. That’s indeed cheaper than what one of the more popular rival vendors, ElevenLabs, charges — $11 for 100,000 characters per month. But it does come at the expense of some customization.
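The article’s pricing figures work out as a quick back-of-the-envelope comparison. The rates come from the article; the words-per-character and hours-of-audio conversions are the article’s own estimates, not official figures:

```python
# Rough cost comparison using the rates cited in the article.
OPENAI_RATE = 15 / 1_000_000      # $ per character (Voice Engine, standard tier)
ELEVENLABS_RATE = 11 / 100_000    # $ per character (cited ElevenLabs plan)

chars = 1_000_000                  # ~162,500 words, per the article
hours_of_audio = 18                # article's estimate for 1M characters

openai_cost = chars * OPENAI_RATE            # $15 total
elevenlabs_cost = chars * ELEVENLABS_RATE    # $110 for the same character count

print(f"Voice Engine: ${openai_cost:.2f} "
      f"(~${openai_cost / hours_of_audio:.2f}/hour of audio)")
print(f"ElevenLabs:   ${elevenlabs_cost:.2f}")
```

At those rates Voice Engine lands at roughly $0.83 per hour of generated audio, consistent with the “somewhat south of $1 per hour” figure above.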

Voice Engine doesn’t offer controls to adjust the tone, pitch or cadence of a voice. In fact, it doesn’t offer any fine-tuning knobs or dials at the moment, although Harris notes that any expressiveness in the 15-second voice sample will carry on through subsequent generations (for example, if you speak in an excited tone, the resulting synthetic voice will sound consistently excited). We’ll see how the quality of the reading compares with other models when they can be compared directly.

Voice talent as commodity

Voice actor salaries on ZipRecruiter range from $12 to $79 per hour — a lot more expensive than Voice Engine, even on the low end (actors with agents will command a much higher price per project). Were it to catch on, OpenAI’s tool could commoditize voice work. So, where does that leave actors?

The talent industry wouldn’t be caught unawares, exactly — it’s been grappling with the existential threat of generative AI for some time. Voice actors are increasingly being asked to sign away rights to their voices so that clients can use AI to generate synthetic versions that could eventually replace them. Voice work — particularly cheap, entry-level work — is at risk of being eliminated in favor of AI-generated speech.

Now, some AI voice platforms are trying to strike a balance.

Replica Studios last year signed a somewhat contentious deal with SAG-AFTRA to create and license copies of the media artist union members’ voices. The organizations said that the arrangement established fair and ethical terms and conditions to ensure performer consent while negotiating terms for uses of synthetic voices in new works, including video games.

ElevenLabs, meanwhile, hosts a marketplace for synthetic voices that allows users to create a voice, verify and share it publicly. When others use a voice, the original creators receive compensation — a set dollar amount per 1,000 characters.

OpenAI isn’t establishing any such labor union deals or marketplaces, at least not in the near term. It requires only that users obtain “explicit consent” from the people whose voices are cloned, make “clear disclosures” indicating which voices are AI-generated, and agree not to use the voices of minors, deceased people or political figures in their generations.

“How this intersects with the voice actor economy is something that we’re watching closely and really curious about,” Harris said. “I think that there’s going to be a lot of opportunity to sort of scale your reach as a voice actor through this kind of technology. But this is all stuff that we’re going to learn as people actually deploy and play with the tech a little bit.”

Ethics and deepfakes

Voice cloning apps can be — and have been — abused in ways that go well beyond threatening the livelihoods of actors.

The infamous message board 4chan, known for its conspiratorial content, used ElevenLabs’ platform to share hateful messages mimicking celebrities like Emma Watson. The Verge’s James Vincent was able to tap AI tools to maliciously, quickly clone voices, generating samples containing everything from violent threats to racist and transphobic remarks. And over at Vice, reporter Joseph Cox documented generating a voice clone convincing enough to fool a bank’s authentication system.

There are fears bad actors will attempt to sway elections with voice cloning. And they’re not unfounded: In January, a phone campaign employed a deepfaked President Biden to deter New Hampshire citizens from voting — prompting the FCC to move to make future such campaigns illegal.

So aside from banning deepfakes at the policy level, what steps is OpenAI taking, if any, to prevent Voice Engine from being misused? Harris mentioned a few.

First, Voice Engine is only being made available to an exceptionally small group of developers — around 10 — to start. OpenAI is prioritizing use cases that are “low risk” and “socially beneficial,” Harris says, like those in healthcare and accessibility, in addition to experimenting with “responsible” synthetic media.

A few early Voice Engine adopters include Age of Learning, an edtech company that’s using the tool to generate voice-overs from previously cast actors, and HeyGen, a storytelling app leveraging Voice Engine for translation. Livox and Lifespan are using Voice Engine to create voices for people with speech impairments and disabilities, and Dimagi is building a Voice Engine-based tool to give feedback to health workers in their primary languages.

Here are generated voices from Lifespan:


And here’s one from Livox:

Second, clones created with Voice Engine are watermarked using a technique OpenAI developed that embeds inaudible identifiers in recordings. (Other vendors including Resemble AI and Microsoft employ similar watermarks.) Harris didn’t promise that there aren’t ways to circumvent the watermark, but described it as “tamper resistant.”

“If there’s an audio clip out there, it’s really easy for us to look at that clip and determine that it was generated by our system and the developer that actually did that generation,” Harris said. “So far, it isn’t open sourced — we have it internally for now. We’re curious about making it publicly available, but obviously, that comes with added risks in terms of exposure and breaking it.”

Third, OpenAI plans to provide members of its red teaming network, a contracted group of experts that help inform the company’s AI model risk assessment and mitigation strategies, access to Voice Engine to suss out malicious uses.

Some experts argue that AI red teaming isn’t exhaustive enough and that it’s incumbent on vendors to develop tools to defend against harms that their AI might cause. OpenAI isn’t going quite that far with Voice Engine — but Harris asserts that the company’s “top principle” is releasing the technology safely.

General release

Depending on how the preview goes and the public reception to Voice Engine, OpenAI might release the tool to its wider developer base, but at present, the company is reluctant to commit to anything concrete.

Harris did give a sneak peek at Voice Engine’s roadmap, though, revealing that OpenAI is testing a security mechanism that has users read randomly generated text as proof that they’re present and aware of how their voice is being used. This could give OpenAI the confidence it needs to bring Voice Engine to more people, Harris said — or it might just be the beginning.

“What’s going to keep pushing us forward in terms of the actual voice matching technology is really going to depend on what we learn from the pilot, the safety issues that are uncovered and the mitigations that we have in place,” he said. “We don’t want people to be confused between artificial voices and actual human voices.”

And on that last point we can agree.

