From Digital Age to Nano Age. WorldWide.


Robotic Automations

For Dataplor’s data intelligence tool, it’s all about location, location, location | TechCrunch


If you want to get your product in a grocery store in Mexico City, Dataplor has global location intelligence to help you do that.

Founder and CEO Geoffrey Michener started the company in 2016 to index micro businesses in emerging markets. The company raised $2 million in 2019 to bring Latin American food delivery vendors online.

Dataplor uses artificial intelligence, machine learning, large language models and a purpose-built technology platform to take in public domain data.

While that is not totally unique — companies like ThoughtSpot, Esri and Near do something similar around business and location intelligence — Dataplor’s “secret sauce” is combining all of that technology and public domain data with a human factor. The company recruits and trains over 100,000 human validators, called Explorers, who verify the machine-collected data. In addition, no personally identifiable information is used.

The result is answers to questions like “How many Taco Bell locations were opened across South America last year?” or “What percentage of Walmarts in Europe are located near a fast food restaurant?”

The company has since amassed more than 300 million point of interest (POI) records on over 15,000 brands — data like physical location, hours, contact information, whether they accept credit cards and consumer sentiment — in over 200 countries and territories.
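Questions like the ones above reduce to simple filters and aggregations over POI records. A minimal sketch of the idea — the field names and records here are hypothetical, not Dataplor’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical POI record; fields are illustrative only.
@dataclass
class POI:
    brand: str
    country: str       # ISO country code
    opened_year: int
    accepts_cards: bool

records = [
    POI("Taco Bell", "BR", 2023, True),
    POI("Taco Bell", "CL", 2023, False),
    POI("Taco Bell", "MX", 2022, True),
    POI("Walmart", "AR", 2021, True),
]

SOUTH_AMERICA = {"AR", "BR", "CL", "CO", "PE"}

# "How many Taco Bell locations opened across South America last year?"
opened = sum(
    1 for r in records
    if r.brand == "Taco Bell"
    and r.country in SOUTH_AMERICA
    and r.opened_year == 2023
)
print(opened)  # 2 of the toy records match
```

At Dataplor’s scale the same query would run against hundreds of millions of validated records rather than an in-memory list, but the shape of the question is the same.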

Dataplor then licenses that data to companies in a wide variety of industries, including third-party logistics, real estate and finance, like American Express, iZettle and PayPal. More than 35 Fortune 500 brands already use Dataplor.

Dataplor’s location intelligence tool showing close rates. Image Credits: Dataplor

“Company 10-Ks are always six months late, so it’s hard to know if a company, for example, Starbucks, what their open or close rates are,” Michener told TechCrunch. “Other companies also want to know if one of their competitors closed or what are the other businesses around there so they can see if they can put a location there. We are trying to empower their decision-making.”

The company has also grown revenue by an average of 2.5x year-over-year since 2020, and is on track for profitability this year, Michener said.

Now the company wants to grow even faster, so Dataplor raised $10.6 million in Series A funding led by Spark Capital. Spark is known for early investments in Slack, Affirm, Postmates, Discord and Deel. The round also includes participation from Quest Venture Partners, Acronym Venture Capital, Circadian Ventures, Two Lanterns Venture Partners and APA Venture Partners. In total, the company has raised $20.3 million.

Dataplor intends to use the funding to make strategic hires and accelerate its sales and brand presence, Michener said.

For the Series A, Spark and Alex Finkelstein, the general partner who led the deal, “had a lot of conviction into what Dataplor was doing,” which was why Michener chose them to lead, he said. As part of the investment, Finkelstein joins Dataplor’s board of directors, which includes John Frankel, founding partner of ffVC.

“Alex saw the bigger picture, and he saw that while we’re not just a POI or places data company, we are helping people get somewhere or sell a product,” Michener said. “He said that by knowing everything about a business, and then across 100 million places, ‘That’s a really big opportunity. No one’s done that before.’ It really resonated, and if we share that same vision, we can use capital to grow and to grow efficiently and effectively, why not? Let’s go do it.”

Have a juicy tip or lead about happenings in the venture world? Send tips to Christine Hall at [email protected] or via this Signal link. Anonymity requests will be respected. 


Software Development in Sri Lanka


MIT tool shows climate change could cost Texans a month and a half of outdoor time by 2080 | TechCrunch


There are a lot of ways to describe what’s happening to the Earth’s climate: Global warming. Climate change. Climate crisis. Global weirding. They all try to capture in different ways the phenomena caused by our world’s weather systems gone awry. Yet despite a thesaurus-entry’s worth of options, it’s still a remarkably difficult concept to make relatable.

Researchers at MIT might finally have an answer, though. Instead of predicting Category 5 hurricanes or record heat days, they’ve developed a tool that allows people to see how many “outdoor days” their region might experience from now through 2100 if carbon emissions growth remains unchecked.
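The metric itself is easy to reason about: count the days in a year whose weather falls inside a comfort band. A toy sketch — the 10–25 °C band below is an assumption for illustration; MIT’s tool uses its own definition driven by full climate-model output:

```python
def outdoor_days(daily_mean_temps_c, low=10.0, high=25.0):
    """Count days whose mean temperature falls in the comfort band."""
    return sum(1 for t in daily_mean_temps_c if low <= t <= high)

# Synthetic year: 60 cool days, 240 mild days, 65 hot days.
baseline = [8.0] * 60 + [20.0] * 150 + [24.0] * 90 + [30.0] * 65
warmer = [t + 3.0 for t in baseline]  # uniform +3 °C warming

# Warming makes the cool days comfortable but pushes the warm days
# past the band, for a net loss -- the Texas/Illinois pattern.
print(outdoor_days(baseline), outdoor_days(warmer))  # 240 210
```

The same mechanism explains why some regions gain: where the baseline has many days just below the comfort band, warming moves them into it.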

The results might be alarming or comforting, depending on where you live.

For people in California or France or Germany, things don’t look so bad. The climate won’t be quite as hospitable in the summers, but it’ll grow a little bit more clement in the spring and fall, adding anywhere from a few days to nearly a month of outdoor weather compared with historical records. The U.K. will be even better off, gaining 40 outdoor days by the end of the century.

Not everyone will come out ahead, though. Some temperate places like New York, Massachusetts, China and Japan will lose a week or more of outdoor days. Elsewhere, the picture looks even more dire. Illinois will lose more than a month of outdoor days by the 2080s as the summers grow unbearably hot. Texas will lose a month and a half for the same reason.

Yet it’s the countries with some of the most vulnerable populations that’ll suffer the most (as scientists have been warning). Nigeria’s summers will grow even hotter and longer, lopping off nearly two months of outdoor days. India will lose almost two and a half months.

It doesn’t have to be that way. Even if the world fails to reach net zero carbon emissions by 2050 — but still manages to by 2070 — the situation will improve dramatically. Both Nigeria and India would only lose one month of outdoor days, and more northerly regions would retain some of their added outdoor days.

Assessing risk

The MIT tool is a relatable application of a field of study known as climate scenario analysis, a branch of strategic planning that seeks to understand how climate change will impact various regions and demographics. It’s not a new field, but as advances in computational power have fostered more sophisticated climate models, it has become more broadly applicable than before.

A range of startups are using this relatively newfound predictive capability to help give shape to an uncertain future.

Many startups in the space are focused on tackling that uncertainty for investors, lenders and insurers. Jupiter Intelligence, Cervest and One Concern all focus on those markets, supplying customers with dashboards and data feeds that they can tailor to regions or even assets of interest. The startups also determine the risk of flood, wildfire and drought, and they’ll deliver reports detailing risk to assets and supply chains. They can also crank out regulatory disclosures, highlighting relevant climate risks.

Investors and insurers are sufficiently worried about how climate change will affect assets and supply chains that these startups have attracted some real cash. Jupiter Intelligence has raised $97 million, according to PitchBook, while Cervest has raised $43 million and One Concern has brought in $152 million.

While major financial institutions are an obvious customer base for climate forecasting companies, other markets exposed to the outdoors are also in need of solutions.

ClimateAI is targeting agriculture, including agribusiness, lenders, and food and beverage companies, all of which have watched as droughts, floods and storms have decimated crops. As a result, water risk assessment is a key feature of ClimateAI’s forecasts, though it provides other weather and climate-related data, too. The startup has raised $37 million so far, per PitchBook.

Sensible Weather is working on markets that are a little closer to home for most of us. It provides insurance for people embarking on outdoor events and activities, from live concerts to camping and golfing. It works with campgrounds, golf courses, live event operators and more, allowing them to give customers an option to insure their outing against inclement weather. It’s an approach that’s landed the startup $22 million in funding, according to PitchBook.

As more businesses and consumers become aware of how climate change is affecting their lives, their demand for certainty will create a wealth of new markets that will offer these startups and their peers ample opportunity to expand. Climate scenario analysis, once a niche limited to academic labs and insurance companies, appears poised to enter the mainstream.




Metaview's tool records interview notes so that hiring managers don't have to | TechCrunch


Siadhal Magos and Shahriar Tajbakhsh were working at Uber and Palantir, respectively, when they both came to the realization that hiring — particularly the process of interviewing — was becoming unwieldy for many corporate HR departments.

“It was clear to us that the most important part of the hiring process is the interviews, but also the most opaque and unreliable part,” Magos told TechCrunch. “On top of this, there’s a bunch of toil associated with taking notes and writing up feedback that many interviewers and hiring managers do everything they can to avoid.”

Magos and Tajbakhsh thought that the hiring process was ripe for disruption, but they wanted to avoid abstracting away too much of the human element. So they launched Metaview, an AI-powered note-taking app for recruiters and hiring managers that records, analyzes and summarizes job interviews.

“Metaview is an AI note-taker built specifically for the hiring process,” Magos said. “It helps recruiters and hiring managers focus more on getting to know candidates and less on extracting data from the conversations. As a consequence, recruiters and hiring managers save a ton of time writing up notes and are more present during interviews because they’re not having to multitask.”

Metaview integrates with apps, phone systems, videoconferencing platforms and tools like Calendly and GoodTime to automatically capture the content of interviews. Magos says the platform “accounts for the nuances of recruiting conversations” and “enriches itself with data from other sources,” such as applicant tracking systems, to highlight the most relevant moments.

“Zoom, Microsoft Teams and Google Meet all have transcription built in, which is a possible alternative to Metaview,” Magos said. “But the information that Metaview’s AI pulls out from interviews is far more relevant to the recruiting use case than generic alternatives, and we also assist users with the next steps in their recruiting workflows in and around these conversations.”

Image Credits: Metaview

Certainly, there’s plenty wrong with traditional job interviewing, and a note-taking and conversation-analyzing app like Metaview could help, at least in theory. As a piece in Psychology Today notes, the human brain is rife with biases that hinder our judgment and decision-making — for example, a tendency to rely too heavily on the first piece of information offered and to interpret information in a way that confirms our preexisting beliefs.

The question is, does Metaview work — and, more importantly, work equally well for all users?

Even the best AI-powered speech dictation systems suffer from their own biases. A Stanford study showed that error rates for Black speakers on speech-to-text services from Amazon, Apple, Google, IBM and Microsoft are nearly double those for white speakers. Another, more recent study published in the journal Computer Speech and Language found statistically significant differences in the way two leading speech recognition models treated speakers of different genders, ages and accents.
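The disparities those studies report are typically measured in word error rate (WER): the word-level edit distance between what was said and what was transcribed, divided by the length of the reference. A minimal implementation of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") across six reference words.
print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
```

“Nearly double” in the Stanford study means, roughly, a WER around 0.35 for Black speakers against 0.19 for white speakers on the same services — a gap large enough to change what a summary captures.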

There’s also hallucination to consider. AI models make mistakes when summarizing, and meeting summaries are no exception. In a recent story, The Wall Street Journal cited an instance where, for one early adopter using Microsoft’s AI Copilot tool for summarizing meetings, Copilot invented attendees and implied calls were about subjects that were never discussed.

When asked what steps Metaview has taken, if any, to mitigate bias and other algorithmic issues, Magos claimed that Metaview’s training data is diverse enough to yield models that “surpass human performance” on recruitment workflows and perform well on popular benchmarks for bias.

I’m skeptical and a bit wary, too, of Metaview’s approach to how it handles speech data. Magos says that Metaview stores conversation data for two years by default unless users request that the data be deleted. That seems like an exceptionally long time.

But none of this appears to have affected Metaview’s ability to get funding or customers.

Metaview this month raised $7 million from investors including Plural, Coelius Capital and Vertex Ventures, bringing the London-based startup’s total raised to $14 million. Metaview’s client count stands at 500 companies, Magos says, including Brex, Quora, Pleo and Improbable — and it’s grown 2,000% year-over-year.

“The money will be used to grow the product and engineering team primarily, and give more fuel to our sales and marketing efforts,” Magos said. “We will triple the product and engineering team, further fine-tune our conversation synthesis engine so our AI is automatically extracting exactly the right information our customers need and develop systems to proactively detect issues like inconsistencies in the interview process and candidates that appear to be losing interest.”




OpenAI built a voice cloning tool, but you can't use it… yet | TechCrunch


As deepfakes proliferate, OpenAI is refining the tech used to clone voices — but the company insists it’s doing so responsibly.

Today marks the preview debut of OpenAI’s Voice Engine, an expansion of the company’s existing text-to-speech API. Under development for about two years, Voice Engine allows users to upload any 15-second voice sample to generate a synthetic copy of that voice. But there’s no date for public availability yet, giving the company time to respond to how the model is used and abused.

“We want to make sure that everyone feels good about how it’s being deployed — that we understand the landscape of where this tech is dangerous and we have mitigations in place for that,” Jeff Harris, a member of the product staff at OpenAI, told TechCrunch in an interview.

Training the model

The generative AI model powering Voice Engine has been hiding in plain sight for some time, Harris said.

The same model underpins the voice and “read aloud” capabilities in ChatGPT, OpenAI’s AI-powered chatbot, as well as the preset voices available in OpenAI’s text-to-speech API. And Spotify’s been using it since early September to dub podcasts for high-profile hosts like Lex Fridman in different languages.

I asked Harris where the model’s training data came from — a bit of a touchy subject. He would only say that the Voice Engine model was trained on a mix of licensed and publicly available data.

Models like the one powering Voice Engine are trained on an enormous number of examples — in this case, speech recordings — usually sourced from public sites and data sets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

OpenAI is already being sued over allegations the company violated IP law by training its AI on copyrighted content, including photos, artwork, code, articles and e-books, without providing the creators or owners credit or pay.

OpenAI has licensing agreements in place with some content providers, like Shutterstock and the news publisher Axel Springer, and allows webmasters to block its web crawler from scraping their site for training data. OpenAI also lets artists “opt out” of and remove their work from the data sets that the company uses to train its image-generating models, including its latest DALL-E 3.

But OpenAI offers no such opt-out scheme for its other products. And in a recent statement to the U.K.’s House of Lords, OpenAI suggested that it’s “impossible” to create useful AI models without copyrighted material, asserting that fair use — the legal doctrine that allows for the use of copyrighted works to make a secondary creation as long as it’s transformative — shields it where it concerns model training.

Synthesizing voice

Surprisingly, Voice Engine isn’t trained or fine-tuned on user data. That’s owing in part to the ephemeral way in which the model — a combination of a diffusion process and transformer — generates speech.

“We take a small audio sample and text and generate realistic speech that matches the original speaker,” said Harris. “The audio that’s used is dropped after the request is complete.”

As he explained it, the model is simultaneously analyzing the speech data it pulls from and the text data meant to be read aloud, generating a matching voice without having to build a custom model per speaker.

It’s not novel tech. A number of startups have delivered voice cloning products for years, from ElevenLabs to Replica Studios to Papercup to Deepdub to Respeecher. So have Big Tech incumbents such as Amazon, Google and Microsoft — the last of which, incidentally, is a major OpenAI investor.

Harris claimed that OpenAI’s approach delivers overall higher-quality speech.

We also know it will be priced aggressively. Although OpenAI removed Voice Engine’s pricing from the marketing materials it published today, in documents viewed by TechCrunch, Voice Engine is listed as costing $15 per one million characters, or ~162,500 words. That would fit Dickens’ “Oliver Twist” with a little room to spare. (An “HD” quality option costs twice that, but confusingly, an OpenAI spokesperson told TechCrunch that there’s no difference between HD and non-HD voices. Make of that what you will.)

That translates to around 18 hours of audio, making the price somewhat south of $1 per hour. That’s indeed cheaper than what one of the more popular rival vendors, ElevenLabs, charges — $11 for 100,000 characters per month. But it does come at the expense of some customization.
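The arithmetic behind those figures is easy to reproduce. A quick sketch, assuming roughly 6.15 characters per English word (including spaces) and about 150 spoken words per minute — both rough conventions, not figures stated by OpenAI:

```python
# Back-of-envelope check of the pricing comparison above.
chars = 1_000_000
price_openai = 15.00          # $ per 1M characters (non-HD tier)

words = chars / 6.15          # ~162,600 words, matching the ~162,500 cited
hours = words / 150 / 60      # ~18 hours of speech at 150 words/minute
print(round(words), round(hours))
print(round(price_openai / hours, 2))  # somewhat south of $1/hour

# ElevenLabs' cited rate: $11 per 100k characters -> $110 per 1M.
price_elevenlabs = 11 * 10
print(price_elevenlabs > price_openai)  # True: ~7x the per-character price
```

At those assumptions the per-character gap is large enough that the conclusion holds even if the words-per-minute estimate is off by a wide margin.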

Voice Engine doesn’t offer controls to adjust the tone, pitch or cadence of a voice. In fact, it doesn’t offer any fine-tuning knobs or dials at the moment, although Harris notes that any expressiveness in the 15-second voice sample will carry on through subsequent generations (for example, if you speak in an excited tone, the resulting synthetic voice will sound consistently excited). We’ll see how the quality of the reading compares with other models when they can be compared directly.

Voice talent as commodity

Voice actor salaries on ZipRecruiter range from $12 to $79 per hour — a lot more expensive than Voice Engine, even on the low end (actors with agents will command a much higher price per project). Were it to catch on, OpenAI’s tool could commoditize voice work. So, where does that leave actors?

The talent industry wouldn’t be caught unawares, exactly — it’s been grappling with the existential threat of generative AI for some time. Voice actors are increasingly being asked to sign away rights to their voices so that clients can use AI to generate synthetic versions that could eventually replace them. Voice work — particularly cheap, entry-level work — is at risk of being eliminated in favor of AI-generated speech.

Now, some AI voice platforms are trying to strike a balance.

Replica Studios last year signed a somewhat contentious deal with SAG-AFTRA to create and license copies of the media artist union members’ voices. The organizations said that the arrangement established fair and ethical terms and conditions to ensure performer consent while negotiating terms for uses of synthetic voices in new works, including video games.

ElevenLabs, meanwhile, hosts a marketplace for synthetic voices that allows users to create a voice, verify and share it publicly. When others use a voice, the original creators receive compensation — a set dollar amount per 1,000 characters.

OpenAI isn’t establishing any such labor union deals or marketplaces, at least not in the near term, and requires only that users obtain “explicit consent” from the people whose voices are cloned, make “clear disclosures” indicating which voices are AI-generated and agree not to use the voices of minors, deceased people or political figures in their generations.

“How this intersects with the voice actor economy is something that we’re watching closely and really curious about,” Harris said. “I think that there’s going to be a lot of opportunity to sort of scale your reach as a voice actor through this kind of technology. But this is all stuff that we’re going to learn as people actually deploy and play with the tech a little bit.”

Ethics and deepfakes

Voice cloning apps can be — and have been — abused in ways that go well beyond threatening the livelihoods of actors.

The infamous message board 4chan, known for its conspiratorial content, used ElevenLabs’ platform to share hateful messages mimicking celebrities like Emma Watson. The Verge’s James Vincent was able to tap AI tools to maliciously, quickly clone voices, generating samples containing everything from violent threats to racist and transphobic remarks. And over at Vice, reporter Joseph Cox documented generating a voice clone convincing enough to fool a bank’s authentication system.

There are fears bad actors will attempt to sway elections with voice cloning. And they’re not unfounded: In January, a phone campaign employed a deepfaked President Biden to deter New Hampshire citizens from voting — prompting the FCC to move to make future such campaigns illegal.

So aside from banning deepfakes at the policy level, what steps is OpenAI taking, if any, to prevent Voice Engine from being misused? Harris mentioned a few.

First, Voice Engine is only being made available to an exceptionally small group of developers — around 10 — to start. OpenAI is prioritizing use cases that are “low risk” and “socially beneficial,” Harris says, like those in healthcare and accessibility, in addition to experimenting with “responsible” synthetic media.

A few early Voice Engine adopters include Age of Learning, an edtech company that’s using the tool to generate voice-overs from previously cast actors, and HeyGen, a storytelling app leveraging Voice Engine for translation. Livox and Lifespan are using Voice Engine to create voices for people with speech impairments and disabilities, and Dimagi is building a Voice Engine-based tool to give feedback to health workers in their primary languages.

Here are generated voices from Lifespan:


And here’s one from Livox:

Second, clones created with Voice Engine are watermarked using a technique OpenAI developed that embeds inaudible identifiers in recordings. (Other vendors including Resemble AI and Microsoft employ similar watermarks.) Harris didn’t promise that there aren’t ways to circumvent the watermark, but described it as “tamper resistant.”
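OpenAI hasn’t disclosed how its watermark works. To make the concept concrete, here is a deliberately naive toy scheme that hides identifier bits in the least significant bit of 16-bit PCM samples — an illustration only, and notably *not* tamper resistant, since re-encoding or resampling the audio destroys an LSB mark; robust watermarks are much harder to build:

```python
import numpy as np

def embed(samples: np.ndarray, bits: list) -> np.ndarray:
    """Hide one identifier bit per sample in the LSB (toy scheme)."""
    marked = samples.copy()
    payload = np.array(bits, dtype=samples.dtype)
    marked[: len(bits)] = (marked[: len(bits)] & ~1) | payload
    return marked

def extract(samples: np.ndarray, n: int) -> list:
    """Read back the first n hidden bits."""
    return [int(b) for b in samples[:n] & 1]

# Six fake 16-bit PCM samples and a 4-bit identifier.
audio = np.array([1000, -2000, 3000, -4000, 5000, -6000], dtype=np.int16)
mark = [1, 0, 1, 1]
print(extract(embed(audio, mark), 4))  # [1, 0, 1, 1]
```

Flipping the lowest bit changes each sample’s amplitude by at most one part in 32,768, which is inaudible — the same property any real audio watermark exploits, with far more machinery to survive compression and editing.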

“If there’s an audio clip out there, it’s really easy for us to look at that clip and determine that it was generated by our system and the developer that actually did that generation,” Harris said. “So far, it isn’t open sourced — we have it internally for now. We’re curious about making it publicly available, but obviously, that comes with added risks in terms of exposure and breaking it.”

Third, OpenAI plans to provide members of its red teaming network, a contracted group of experts that help inform the company’s AI model risk assessment and mitigation strategies, access to Voice Engine to suss out malicious uses.

Some experts argue that AI red teaming isn’t exhaustive enough and that it’s incumbent on vendors to develop tools to defend against harms that their AI might cause. OpenAI isn’t going quite that far with Voice Engine — but Harris asserts that the company’s “top principle” is releasing the technology safely.

General release

Depending on how the preview goes and the public reception to Voice Engine, OpenAI might release the tool to its wider developer base, but at present, the company is reluctant to commit to anything concrete.

Harris did give a sneak peek at Voice Engine’s roadmap, though, revealing that OpenAI is testing a security mechanism that has users read randomly generated text as proof that they’re present and aware of how their voice is being used. This could give OpenAI the confidence it needs to bring Voice Engine to more people, Harris said — or it might just be the beginning.

“What’s going to keep pushing us forward in terms of the actual voice matching technology is really going to depend on what we learn from the pilot, the safety issues that are uncovered and the mitigations that we have in place,” he said. “We don’t want people to be confused between artificial voices and actual human voices.”

And on that last point we can agree.


