
Watch it and weep (or smile): Synthesia's AI video avatars now feature emotions | TechCrunch


Generative AI has captured the public imagination with a leap into creating elaborate, plausibly real text and imagery out of verbal prompts. But the catch — and there is often a catch — is that the results are often far from perfect when you look a little closer.

People point out strange fingers, floor tiles that slip away, and math problems that are exactly that: problems — sometimes the answers simply don’t add up.

Now, Synthesia — one of the ambitious AI startups working in video, specifically custom avatars designed for business users to create promotional, training and other enterprise video content — is releasing an update that it hopes will help it leapfrog over some of the challenges in its particular field. Its latest version features avatars — built based on actual humans captured in its studio — that provide more emotion, better lip tracking and what it says are more expressive, natural and humanlike movements when they are fed text to generate videos.

The release is coming on the heels of some impressive progress for the company to date. Unlike other generative AI players like OpenAI, which has built a two-pronged strategy — raising huge public awareness with consumer tools like ChatGPT while also building out a B2B offering, with its APIs used by independent developers as well as giant enterprises — Synthesia is leaning into the approach that some other prominent AI startups are taking.

Similar to how Perplexity has focused on really nailing generative AI search, Synthesia is focused on really nailing how to build the most humanlike generative video avatars possible. More specifically, it is looking to do this only for the business market and use cases like training and marketing.

That focus has helped Synthesia stand out in what has become a very crowded AI market, one that runs the risk of getting commoditized as the hype settles down into longer-term concerns like ARR, unit economics and the operational costs attached to AI implementations.

Synthesia describes its new Expressive Avatars, the version being released today, as a first of their kind: “The world’s first avatars fully generated with AI.” Built on large, pre-trained models, Synthesia says its breakthrough has been in how they are combined to achieve multimodal distributions that more closely mimic how actual humans speak.

These are generated on the fly, Synthesia says, which is meant to be closer to the experience we go through when we speak or react in life, and stands in contrast to how a lot of AI video tools based around avatars work today: typically these are actually many pieces of video that get quickly stitched together to create facial responses that line up, more or less, with the scripts that are fed into them. The aim is to appear less robotic, and more lifelike.

Previous version:

New version:

As you can see in the two examples here, one from Synthesia’s older version and one from the version being released today, there is still a ways to go in development, something CEO Victor Riparbelli himself admits.

“Of course it’s not 100% there yet, but it will be very, very soon, by the end of the year. It’ll be so mind blowing,” he told TechCrunch. “I think you can also see that the AI part of this is very subtle. With humans there’s so much information in the tiniest details, the tiniest like movements of our facial muscles. I think we could never sit down and describe, ‘yes you smile like this when you’re happy but that is fake right?’ That is such a complex thing to ever describe for humans, but it can be [captured in] deep learning networks. They’re actually able to figure out the pattern and then replicate it in a predictable way.” The next thing it’s working on, he added, is hands.

“Hands are like, super hard,” he added.

The focus on B2B also helps Synthesia anchor its messaging and product more on “safe” AI usage. That is essential especially with the huge concern today over deepfakes and using AI for malicious purposes like misinformation and fraud. Even so, Synthesia hasn’t managed to avoid controversy on that front altogether. As we’ve pointed out before, Synthesia’s tech has previously been misused to produce propaganda in Venezuela and false news reports promoted by pro-China social media accounts.

The company today noted that it has taken further steps to try to lock down that usage. Last month, it updated its policies, it said, “to restrict the type of content people can make, investing in the early detection of bad faith actors, increasing the teams that work on AI safety, and experimenting with content credentials technologies such as C2PA.”

Despite those challenges, the company has continued to grow.

Synthesia was last valued at $1 billion when it raised $90 million. Notably, that fundraise was almost a year ago, in June 2023.

Riparbelli (pictured above, right, with other co-founders Steffen Tjerrild, Professor Lourdes Agapito, Professor Matthias Niessner) said in an interview earlier this month that there are currently no plans to raise more, although that doesn’t really answer the question of whether Synthesia is getting proactively approached. (Note: we are very excited to have the actual human Riparbelli speaking at an event of ours in London in May, where I’m definitely going to ask about this again. Please come if you’re in town.)

What we do know for sure is that AI costs a lot of money to build and run, and Synthesia has been building and running a lot.

Prior to the launch of today’s version, some 200,000 people have created more than 18 million video presentations across some 130 languages using Synthesia’s 225 legacy avatars, the company said. (It does not break out how many users are on its paid tiers, but there are a lot of big-name customers, including Zoom, the BBC, DuPont and more, and enterprises do pay.) The startup’s hope, of course, is that with the new version getting pushed out today, those numbers will go up even more.



Amazon brings its 'Amazon Live' shoppable livestreams to Prime Video and Freevee | TechCrunch


Amazon is trying to keep live shopping relevant with the launch of an “Amazon Live” FAST (free ad-supported TV) channel on Prime Video and Freevee. Previously only available as a feature on desktop, mobile and Fire TV, the new live channel will give customers in the U.S. more ways to engage with interactive, shoppable content.

Amazon Live’s FAST channel will feature 24/7 programming from popular creators and celebrities, such as reality TV stars Lala Kent (“Vanderpump Rules”) and Paige DeSorbo (“Southern Charm”), who is also launching her own original show on Amazon Live, where she’ll develop brand new content. Brands like Tastemade and The Bump will also host streams to sell their products.

Viewers can browse and buy the items influencers show off by using the Amazon Shopping app on their mobile device. When entering “shop the show” into the search bar, users are directed in real time to a shopping carousel featuring the products they see on TV.

Image Credits: Amazon

This isn’t the first time Prime Video has introduced an e-commerce shopping experience on the streamer. To promote “The Boys” spinoff series “Gen V,” Amazon launched a virtual store selling merchandise and home goods based on Godolkin University, the superhero school in the show.

Last year, QVC and HSN — the top two shopping channels — launched linear offerings on Freevee, which were the only livestream shopping channels on the service at the time.

Amazon Live launched in 2019 as a QVC-like shopping experience to help brands get their products discovered and for talent to interact with fans. It rolled out the offering to customers in India in 2022. According to the company, more than 1 billion customers in the U.S. and India streamed Amazon Live’s shoppable videos in 2023 alone.

Despite Amazon’s success with live shopping, the format still makes up only a small percentage of the e-commerce market. Last year, live shopping was anticipated to be worth $31.7 billion, whereas total U.S. online retail sales reportedly reached $1.14 trillion.



X is launching a TV app for videos 'soon' | TechCrunch


X, the company formerly known as Twitter, is launching a dedicated TV app for videos uploaded to the social network soon. X CEO Linda Yaccarino announced on Tuesday that the new app will bring “real-time, engaging content to your smart TVs.” The app’s interface looks quite similar to YouTube’s, as seen in a teaser video shared by Yaccarino.

The app will feature a trending video algorithm that is designed to help users stay updated with tailored popular content, along with AI-powered topics that will organize videos by subject. The app will also support cross-device viewing, which means you can start watching a video on your phone and then continue watching it on your TV.

Yaccarino says the app will feature enhanced video search and be available on “most smart TVs.” Although there isn’t an official launch date for the app, the executive says it will be available “soon.”

The upcoming app launch is part of Yaccarino’s efforts to turn the social media site into a free-speech “video first” platform. The social network currently features an original show hosted by former congresswoman Tulsi Gabbard and another by former Fox Sports host Jim Rome. Last month, Musk canceled a talk show deal with former CNN anchor Don Lemon after he was interviewed for the first episode of the show.

The announcement comes a week after Truth Social, the social media platform owned by Donald Trump’s media company, also unveiled its plans to launch a live TV streaming platform. The platform will focus on “news networks” and “religious channels,” along with “content that has been canceled” or “is being suppressed on other platforms and services,” the company said.





Quibi redux? Short drama apps saw record revenue in Q1 2024 | TechCrunch


Was Quibi just ahead of its time? Quibi founder Jeffrey Katzenberg ultimately blamed the COVID-19 pandemic for the failure of his short-form video app, but maybe it was just too soon. New app store data indicates that the idea Quibi popularized — original shows cut into short clips, offering quick entertainment — is now making a comeback. In the first quarter of 2024, 66 short drama apps like ReelShort and DramaBox pulled in record revenue of $146 million in global consumer spending.

This represents an over 8,000% increase from $1.8 million in the first quarter of 2023, when just 21 apps were available, according to data from app intelligence firm Appfigures. Since then, 45 more apps have joined the market, earning approximately $245 million in gross consumer spending and reaching some 121 million downloads.

Image Credits: Appfigures

In March 2024 alone, consumers spent $65 million on short drama apps, a 10,500% increase from the $619,000 spent in March 2023.

It appears the revenue growth began to accelerate in fall 2023, per Appfigures data, leading to a huge revenue jump between February and March of this year, when global revenue grew 56% to reach $65.7 million, up from $42 million. In part, the revenue growth is tied to the larger number of apps available, of course, but marketing, ad spend and consumer interest also played a role.
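
If you want to check those percentages yourself, they are simple period-over-period changes computed from the dollar figures Appfigures reports. Here’s a minimal sketch — using the approximate amounts quoted above, so treat the exact outputs as illustrative — that reproduces the numbers:

```python
# Sanity check of the growth figures cited above, using the approximate
# dollar amounts reported by Appfigures and quoted in this article.

def pct_increase(new: float, old: float) -> float:
    """Percentage increase from `old` to `new`."""
    return (new - old) / old * 100

comparisons = {
    "Q1 2024 vs. Q1 2023 revenue":       (146_000_000, 1_800_000),   # "over 8,000%"
    "March 2024 vs. March 2023 revenue": (65_700_000, 619_000),      # "~10,500%"
    "March 2024 vs. Feb 2024 revenue":   (65_700_000, 42_000_000),   # "56%"
}

for label, (new, old) in comparisons.items():
    print(f"{label}: up {pct_increase(new, old):,.0f}%")
```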

The top apps by revenue — ReelShort (No. 1) and DramaBox (No. 2) — generated $52 million and $35 million in Q1 2024, respectively. That’s around 37% and 24% of the revenue generated by the top 10 apps, respectively.

The No. 3 app ShortTV grossed $17 million globally in Q1, or 12% of the total.

What’s interesting about these apps, compared with Quibi’s earlier attempt to carve out a niche in this space, is the content quality. That is, it’s much, much worse than Quibi’s — and Quibi’s was not always great. As TechCrunch wrote last year when describing ReelShort, the stories in the app are “like snippets from low-quality soaps — or as if those mobile storytelling games came to life.”

Regardless of the terrible acting and writing, the apps have seemingly found a bit of an audience.

Image Credits: Appfigures

By both installs and revenue, the U.S. is by far the leader in terms of top markets for this cohort. But overall, the charts vary in terms of which countries are downloading versus paying for the content.

By installs, the top markets after the U.S. are Indonesia, India, the Philippines and Brazil, while the U.K., Australia, Canada and the Philippines make up the top markets by revenue, beyond the U.S.

In Q1 2024, short drama apps were installed nearly 37 million times, up 992% from 3.4 million in Q1 2023. By downloads, ReelShort and ShortTV are the top two apps, with the former accounting for 37% of installs, or 13.3 million, and the latter with 10 million installs, or 27%. DramaBox, No. 2 by consumer spending, was No. 3 by installs with 7 million (19%) downloads.

Image Credits: Appfigures

Mirroring wider app store trends, the majority of the revenue (63%) is generated on iOS, while Android accounts for the majority (67%) of downloads.

Though there’s growth in this market, these apps see nowhere near the traction that their nearest competitors — short-form video and streaming video — do. Short drama apps claimed a 6.7% share of the total across all three categories combined, up from 0.15% a year ago. But the wider video app market makes a lot more money.

For instance, the top 10 apps across the combined three categories, which include apps like TikTok and Disney+, made $1.8 billion in Q1.

Image Credits: Appfigures

Image Credits: Appfigures



New Google Vids product helps create a customized video with an AI assist | TechCrunch


All of the major vendors have been looking at ways to use AI to help customers develop creative content. On Tuesday at the Google Cloud Next customer conference in Las Vegas, Google introduced a new AI-fueled video creation tool called Google Vids. The tool will become part of the Google Workspace productivity suite when it’s released.

“I want to share something really entirely new. At Google Cloud Next, we’re unveiling Google Vids, a brand new, AI-powered video creation app for work,” Aparna Pappu, VP & GM at Google Workspace, said, introducing the tool.

Image Credits: Frederic Lardinois/TechCrunch

The idea is to provide a video creation tool alongside other Workspace tools like Docs and Sheets with a similar ability to create and collaborate in the browser, except in this case, on video. “This is your video editing, writing and production assistant, all in one,” Pappu said. “We help transform the assets you already have — whether marketing copy or images or whatever else in your drive — into a compelling video.”

Like other Google Workspace tools, you can collaborate with colleagues in real time in the browser. “No need to email files back and forth. You and your team can work on the story at the same time with all the same access controls and security that we provide for all of Workspace,” she said.

Image Credits: Google Cloud

Examples of the kinds of videos people are creating with Google Vids include product pitches, training content or celebratory team videos. Like most generative AI tooling, Google Vids starts with a prompt. You enter a description of what you want the video to look like. You can then access files in your Google Drive or use stock content provided by Google and the AI goes to work, creating a storyboard of the video based on your ideas.

You can then reorder the different parts of the video, add transitions, select a template and insert an audio track where you record the audio or add a script and a preset voice will read it. Once you’re satisfied, you can generate the video. Along the way colleagues can comment or make changes, just as with any Google Workspace tool.

Google Vids is currently in limited testing. In June it will roll out to additional testers in Google Labs and will eventually be available for customers with Gemini for Workspace subscriptions.

Image Credits: Frederic Lardinois/TechCrunch



Google releases Imagen 2, a video clip generator | TechCrunch


Google doesn’t have the best track record when it comes to image-generating AI.

In February, the image generator built into Gemini, Google’s AI-powered chatbot, was found to be randomly injecting gender and racial diversity into prompts about people, resulting in images of racially diverse Nazis, among other offensive inaccuracies.

Google pulled the generator, vowing to improve it and eventually re-release it. As we await its return, the company’s launching an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform — albeit a tool with a decidedly more enterprise bent. Google announced Imagen 2 at its annual Cloud Next conference in Las Vegas.

Image Credits: Frederic Lardinois/TechCrunch

Imagen 2 — which is actually a family of models, launched in December after being previewed at Google’s I/O conference in May 2023 — can create and edit images given a text prompt, like OpenAI’s DALL-E and Midjourney. Of interest to corporate types, Imagen 2 can render text, emblems and logos in multiple languages, optionally overlaying those elements in existing images — for example, onto business cards, apparel and products.

After launching first in preview, image editing with Imagen 2 is now generally available in Vertex AI along with two new capabilities: inpainting and outpainting. Inpainting and outpainting, features other popular image generators such as DALL-E have offered for some time, can be used to remove unwanted parts of an image, add new components and expand the borders of an image to create a wider field of view.

But the real meat of the Imagen 2 upgrade is what Google’s calling “text-to-live images.”

Imagen 2 can now create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. True to Imagen 2’s corporate focus, Google’s pitching live images as a tool for marketers and creatives, such as a GIF generator for ads showing nature, food and animals — subject matter that Imagen 2 was fine-tuned on.

Google says that live images can capture “a range of camera angles and motions” while “supporting consistency over the entire sequence.” But they’re in low resolution for now: 360 pixels by 640 pixels. Google’s pledging that this will improve in the future. 

To allay (or at least attempt to allay) concerns around the potential to create deepfakes, Google says that Imagen 2 will employ SynthID, an approach developed by Google DeepMind, to apply invisible, cryptographic watermarks to live images. Of course, detecting these watermarks — which Google claims are resilient to edits, including compression, filters and color tone adjustments — requires a Google-provided tool that’s not available to third parties.

And no doubt eager to avoid another generative media controversy, Google’s emphasizing that live image generations will be “filtered for safety.” A spokesperson told TechCrunch via email: “The Imagen 2 model in Vertex AI has not experienced the same issues as the Gemini app. We continue to test extensively and engage with our customers.”

Image Credits: Frederic Lardinois/TechCrunch

But generously assuming for a moment that Google’s watermarking tech, bias mitigations and filters are as effective as it claims, are live images even competitive with the video generation tools already out there?

Not really.

Runway can generate 18-second clips in much higher resolutions. Stability AI’s video clip tool, Stable Video Diffusion, offers greater customizability (in terms of frame rate). And OpenAI’s Sora — which, granted, isn’t commercially available yet — appears poised to blow away the competition with the photorealism it can achieve.

So what are the real technical advantages of live images? I’m not really sure. And I don’t think I’m being too harsh.

After all, Google is behind genuinely impressive video generation tech like Imagen Video and Phenaki. Phenaki, one of Google’s more interesting experiments in text-to-video, turns long, detailed prompts into two-minute-plus “movies” — with the caveat that the clips are low resolution, low frame rate and only somewhat coherent.

In light of recent reports suggesting that the generative AI revolution caught Google CEO Sundar Pichai off guard and that the company’s still struggling to maintain pace with rivals, it’s not surprising that a product like live images feels like an also-ran. But it’s disappointing nonetheless. I can’t help the feeling that there is — or was — a more impressive product lurking in Google’s skunkworks.

Models like Imagen are trained on an enormous number of examples usually sourced from public sites and datasets around the web. Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

I asked, as I always do around announcements pertaining to generative AI models, about the data that was used to train the updated Imagen 2, and whether creators whose work might’ve been swept up in the model training process will be able to opt out at some future point.

Google told me only that its models are trained “primarily” on public web data, drawn from “blog posts, media transcripts and public conversation forums.” Which blogs, transcripts and forums? It’s anyone’s guess.

A spokesperson pointed to Google’s web publisher controls that allow webmasters to prevent the company from scraping data, including photos and artwork, from their websites. But Google wouldn’t commit to releasing an opt-out tool or, alternatively, compensating creators for their (unknowing) contributions — a step that many of its competitors, including OpenAI, Stability AI and Adobe, have taken.

Another point worth mentioning: Text-to-live images isn’t covered by Google’s generative AI indemnification policy, which protects Vertex AI customers from copyright claims related to Google’s use of training data and outputs of its generative AI models. That’s because text-to-live images is technically in preview; the policy only covers generative AI products in general availability (GA).

Regurgitation, or where a generative model spits out a mirror copy of an example (e.g., an image) that it was trained on, is rightly a concern for corporate customers. Studies both informal and academic have shown that the first-gen Imagen wasn’t immune to this, spitting out identifiable photos of people, artists’ copyrighted works and more when prompted in particular ways.

Barring controversies, technical issues or some other major unforeseen setbacks, text-to-live images will enter GA somewhere down the line. But with live images as it exists today, Google’s basically saying: use at your own risk.



Uber Eats launches a TikTok-like video feed to boost discovery | TechCrunch


Uber Eats is launching a TikTok-like short-form video feed to boost discovery and help restaurants showcase their dishes. Uber Eats’ senior director of Product, Awaneesh Verma, told TechCrunch exclusively in an interview that the new feed is being tested in New York, San Francisco and Toronto. The company plans to launch the feed worldwide in the future.

With this launch, Uber Eats now joins numerous other popular apps that have launched their own short-form video feeds following TikTok’s rise in popularity, including Instagram, YouTube, Snapchat and Netflix to name a few. TechCrunch also recently learned that LinkedIn has started experimenting with its own TikTok-like feed.

The new Uber Eats short-form videos are visible in carousels placed across the app, including the homescreen. Once you click on a video preview, you will enter into a vertical feed of short-form content that you can swipe through. You will only see content from restaurants that are close enough to deliver to you.

Verma says the feed is designed to replicate the experience of being in a restaurant in person and seeing people preparing food and being inspired to try something new. As you swipe through the feed, you may come across a video of an ice cream shop preparing a Nutella milkshake, or a video of an Indian restaurant packing rice separately from curry so it doesn’t get soggy by the time it gets delivered to your house.

“The early data shows people are much more confident trying new dishes and trying things that they otherwise wouldn’t have,” Verma said. “Even little things like being able to see texture, and the details of what a portion size looks like, or what’s in a dish, has been really inspiring for our users.”

Image Credits: Uber Eats

Uber Eats notes that the videos aren’t ads, as the company isn’t charging merchants for the content placements.

Many restaurants run social media accounts on apps like Instagram and TikTok to reach new customers and showcase their food using short-form videos. By allowing merchants to share short-form videos directly in the Uber Eats app, the company is helping restaurants reach customers directly as they decide what to order. As for consumers, many people already use social media to discover new places and dishes to try, so Uber Eats likely hopes its new feed will encourage users to find that inspiration directly within its own app.

Some users might not see the launch as a welcome addition to the app, as they may feel overwhelmed by the sheer amount of different short-form video feeds in popular apps. While it may make sense to have short-form video feeds in entertainment and social media apps, the introduction of one in a food-delivery app may not be a favorable choice for some.

Verma also shared that in order to further support merchants, the company has revamped its Uber Eats Manager software and added personalized growth recommendations. The software is now capable of encouraging restaurants to grow their business by doing things like running a promotion on a certain dish or adding photos to menu listings.

In addition, the company is going to launch an entirely new app for restaurant managers this summer that is designed to make it easier for restaurants to be more proactive on the go. For instance, the app could alert a restaurant manager that their store is having issues or that they may want to boost sales with new ads.

Uber Eats announced on Monday that it now has more than 1 million merchants around the world on its platform, across 11,000 cities on six continents.



Adobe's working on generative video, too | TechCrunch


Adobe says it’s building an AI model to generate video. But it’s not revealing when this model will launch, exactly — or much about it besides the fact that it exists.

Offered as an answer of sorts to OpenAI’s Sora, Google’s Imagen 2 and models from the growing number of startups in the nascent generative AI video space, Adobe’s model — a part of the company’s expanding Firefly family of generative AI products — will make its way into Premiere Pro, Adobe’s flagship video editing suite, sometime later this year, Adobe says.

Like many generative AI video tools today, Adobe’s model creates footage from scratch (from either a prompt or reference images) — and it powers three new features in Premiere Pro: object addition, object removal and generative extend.

They’re pretty self-explanatory.

Object addition lets users select a segment of a video clip — the upper third, say, or lower left corner — and enter a prompt to insert objects within that segment. In a briefing with TechCrunch, an Adobe spokesperson showed a still of a real-world briefcase filled with diamonds generated by Adobe’s model.

AI-generated diamonds, courtesy of Adobe.

Object removal removes objects from clips, like boom mics or coffee cups in the background of a shot.

Removing objects with AI. Notice the results aren’t quite perfect.

As for generative extend, it adds a few frames to the beginning or end of a clip (unfortunately, Adobe wouldn’t say how many frames). Generative extend isn’t meant to create whole scenes, but rather add buffer frames to sync up with a soundtrack or hold on to a shot for an extra beat — for instance to add emotional heft.

Image Credits: Adobe

To address the fears of deepfakes that inevitably crop up around generative AI tools such as these, Adobe says it’s bringing Content Credentials — metadata to identify AI-generated media — to Premiere. Content Credentials, a media provenance standard that Adobe backs through its Content Authenticity Initiative, are already in Photoshop and a component of Adobe’s image-generating Firefly models. In Premiere, they’ll indicate not only which content was AI-generated but which AI model was used to generate it.

I asked Adobe what data — images, videos and so on — were used to train the model. The company wouldn’t say, nor would it say how (or whether) it’s compensating contributors to the data set.

Last week, Bloomberg, citing sources familiar with the matter, reported that Adobe’s paying photographers and artists on its stock media platform, Adobe Stock, up to $120 for submitting short video clips to train its video generation model. The pay’s said to range from around $2.62 per minute of video to around $7.25 per minute depending on the submission, with higher-quality footage commanding correspondingly higher rates.

That’d be a departure from Adobe’s current arrangement with Adobe Stock artists and photographers whose work it’s using to train its image generation models. The company pays those contributors an annual bonus, not a one-time fee, depending on the volume of content they have in Stock and how it’s being used — albeit a bonus that’s subject to an opaque formula and not guaranteed from year to year.

Bloomberg’s reporting, if accurate, depicts an approach in stark contrast to that of generative AI video rivals like OpenAI, which is said to have scraped publicly available web data — including videos from YouTube — to train its models. YouTube’s CEO, Neal Mohan, recently said that use of YouTube videos to train OpenAI’s text-to-video generator would be an infraction of the platform’s terms of service, highlighting the legal tenuousness of OpenAI’s and others’ fair use argument.

Companies including OpenAI are being sued over allegations that they’re violating IP law by training their AI on copyrighted content without providing the owners credit or pay. Adobe seems intent on avoiding this fate, like its sometime generative AI competitors Shutterstock and Getty Images (which also have arrangements to license model training data), and — with its IP indemnity policy — on positioning itself as a verifiably “safe” option for enterprise customers.

On the subject of payment, Adobe isn’t saying how much it’ll cost customers to use the upcoming video generation features in Premiere; presumably, pricing’s still being hashed out. But the company did reveal that the payment scheme will follow the generative credits system established with its early Firefly models.

For customers with a paid subscription to Adobe Creative Cloud, generative credits renew at the beginning of each month, with allotments ranging from 25 to 1,000 per month depending on the plan. More complex workloads (e.g., higher-resolution generated images or multiple image generations) require more credits, as a general rule.

The big question in my mind is, will Adobe’s AI-powered video features be worth whatever they end up costing?

The Firefly image generation models so far have been widely derided as underwhelming and flawed compared to Midjourney, OpenAI’s DALL-E 3 and other competing tools. The lack of release time frame on the video model doesn’t instill a lot of confidence that it’ll avoid the same fate. Neither does the fact that Adobe declined to show me live demos of object addition, object removal and generative extend — insisting instead on a prerecorded sizzle reel.

Perhaps to hedge its bets, Adobe says that it’s in talks with third-party vendors about integrating their video generation models into Premiere, as well, to power tools like generative extend and more.

One of those vendors is OpenAI.

Adobe says it’s collaborating with OpenAI on ways to bring Sora into the Premiere workflow. (An OpenAI tie-up makes sense given the AI startup’s overtures to Hollywood recently; tellingly, OpenAI CTO Mira Murati will be attending the Cannes Film Festival this year.) Other early partners include Pika, a startup building AI tools to generate and edit videos, and Runway, which was one of the first vendors to market with a generative video model.

An Adobe spokesperson said the company would be open to working with others in the future.

Now, to be crystal clear, these integrations are more of a thought experiment than a working product at present. Adobe stressed to me repeatedly that they’re in “early preview” and “research” rather than a thing customers can expect to play with anytime soon.

And that, I’d say, captures the overall tone of Adobe’s generative video presser.

Adobe’s clearly trying to signal with these announcements that it’s thinking about generative video, if only in the preliminary sense. It’d be foolish not to — to be caught flat-footed in the generative AI race is to risk losing out on a valuable potential new revenue stream, assuming the economics eventually work out in Adobe’s favor. (AI models are costly to train, run and serve, after all.)

But what it’s showing — concepts — isn’t super compelling, frankly. With Sora in the wild and surely more innovations coming down the pipeline, the company has much to prove.



Facebook takes on TikTok with a new, vertical-first video player | TechCrunch


Facebook is introducing a new, full-screen video player on Wednesday, which offers a more consistent design and experience for all types of video lengths, including short-form Reels, long-form videos and even Live content. The upgraded player, which will first launch in the U.S. and Canada, aims to streamline the experience for both watching and sharing video content. But more importantly, it will default to showing videos in vertical mode and will also allow Facebook to recommend the most relevant video to watch next, no matter what type of video that may be: long, short or live.

The latter change could potentially affect key factors that creators and advertisers care about, like watch time, number of views, reach and more. For Facebook, meanwhile, more people watching videos on the platform could allow it to increase time onsite, plus advertising views and clicks, among other things. It also gives Facebook a way to better compete against other popular video platforms that rely on algorithmic recommendations, like YouTube and TikTok, as it broadens the pool of possible recommendations to include more video formats.

Image Credits: Meta

These improved recommendations will also appear outside the player, on the Facebook Feed and Video tab. In addition, Facebook said it will show users more Reels going forward, given the demand for short-form video.

Facebook says its upgraded player will also offer new controls like a full-screen mode for horizontal videos and a slider to skip around in longer videos. Plus, users will be able to tap on the video to bring up more options to do things, like pause and jump back or forward 10 seconds.

Image Credits: Meta

Notably, the player will default to showing videos in vertical mode, like TikTok, though users will be able to access a full-screen option for horizontal videos that allows them to flip to watch in landscape mode. TikTok, by comparison, has also tested horizontal videos and long-form content of 30 minutes as it looks to compete with YouTube and other sites.

Facebook says the decision to prioritize the smartphone-driven vertical video format came about because it’s seen a shift in video consumption, where much of the viewing now takes place on mobile.

Facebook’s player will first roll out to iOS and Android devices in the U.S. and Canada before expanding globally in the months ahead.

An improved video-playing experience could potentially help Facebook capture the attention of a younger audience, too.

Image Credits: Meta

Although Facebook has declined in popularity with Gen Z over the past decade, The New York Times recently reported that many young people are now turning to the site for its Marketplace. That offers Facebook the opportunity to try to capture their attention in other ways, while on the site, including through Gen Z’s preferred social format, video.

There are other hints that young people are starting to rediscover Facebook, too. A report by NBC News indicated that Gen Z was boosting the “Facebook poke” — a long-forgotten gesture that was a simple way of saying hi. In March, Facebook announced that it had seen a 13x spike in pokes over the past month, for example.

The timing of the video player change also comes at a time when U.S. lawmakers are weighing a possible TikTok ban, which, if enacted, could increase video consumption on other social platforms.



Storiaverse launches a short-form storytelling app that combines video and written content | TechCrunch


Agnes Kozera and David Kierzkowski, the co-founders of podcast sponsorship marketplace Podcorn, today launched their newest app — Storiaverse, a short-form entertainment platform that offers a multi-format reading experience, combining animated video and written content.

Available on iOS and Android devices, Storiaverse caters to graphic novel readers and adult animation fans who want to discover original stories in a short-form, animated format.

“Our mission is to make Storiaverse the biggest storytelling platform and to make reading more immersive and engaging,” Kozera, who also co-founded YouTube marketing platform FameBit (which Google acquired in 2016), told TechCrunch.

“We believe our format not only caters to existing fans of literature and animation but also has the potential to attract wider audiences that are seeking new forms of entertainment…Even people who have shied away from reading because they are more [visual readers] can enjoy reading through our patent-pending read-watch format,” she said.

Image Credits: Storiaverse

Storiaverse’s “Read-Watch” format is exactly what it sounds like. Users swipe up on a story to watch a series of animated clips, then tap on the screen to enter reading mode. There’s also an option to skip the videos if they prefer reading all the chapters first and then going back to view the animation. Stories range in length from five to 10 minutes.

At launch, Storiaverse offers 25 original titles spanning genres such as science fiction, fantasy, horror, mystery and comedy. Creators who released stories on the app include animator Josh Ryba, who has contributed to projects such as popular TV shows “Raised by Wolves” and “One Piece;” animator Jonathan Fontaine, who has worked on the Disney movie “Descendants;” and writer John M. Floyd, who has been featured in Alfred Hitchcock’s Mystery Magazine, among others.

Notably, book publisher HarperCollins is also partnering with the company to adapt titles like Madeleine Roux’s horror novel series Asylum and Joelle Charbonneau’s new fantasy series Dividing Eden. Additionally, TikTok star and independent animator King Science (Science Akbar) is teaming up to create an exclusive story on the app.

There are currently more than 100 creators working with Storiaverse and more than 100 stories in development.

Co-founders Agnes Kozera and David Kierzkowski. Image Credits: Storiaverse

Storiaverse launches at a time when many creators are panicking about the future of TikTok, the ByteDance-owned short-form video app where many storytellers have built a sizable audience (like King Science and his 13 million followers) and use the platform to show off their work.

Like TikTok and YouTube Shorts, Storiaverse offers an additional revenue stream for creators.

“There is a vast community of independent writers who often struggle for recognition and compensation. We believe their content can be invigorated in a more modern format to reach new readers,” Kozera said, adding that Storiaverse compensates both writers and animators for their contributions to the app. “The [compensation] fee varies based on factors such as length and complexity of the story,” she explained.

The company may also take other pages out of its competitors’ playbooks by bringing in ads, merchandise and subscriptions. Another idea on the table is adding product placement to videos, Kozera told us.

Storiaverse says it has already received thousands of submissions from writers. Creators can apply on Storiaverse’s website. When writers are accepted, they’re connected with an animator who helps bring the words to life.

The company is also building a Creator Suite for creators to collaborate with each other, access story performance insights and explore “more monetization opportunities,” Kozera said.

Storiaverse has raised $2.5 million in pre-seed funding led by 500 Global.



