
Paul Graham claims Sam Altman wasn't fired from Y Combinator | TechCrunch


In a series of posts on X on Thursday, Paul Graham, the co-founder of startup accelerator Y Combinator, brushed off claims that OpenAI CEO Sam Altman was pressured to resign as president of Y Combinator in 2019 due to potential conflicts of interest. “People have been claiming [Y Combinator] fired Sam Altman,” Graham writes. “That’s […]

© 2024 TechCrunch. All rights reserved. For personal use only.



Fisker stiffed the engineering firm developing its low-cost EV and pickup truck, lawsuit claims | TechCrunch


Henrik Fisker stood on a stage last August and proudly debuted two prototypes designed to catapult his eponymous EV startup Fisker into the mainstream. There was the Pear, a low-cost EV meant for the masses, and the Alaska, Fisker’s entry into the red-hot pickup truck market.

In the weeks that followed, Fisker stopped paying the engineering firm that helped develop those vehicles, according to a previously unreported lawsuit filed in federal court this week. The firm, a U.S. subsidiary of German engineering giant Bertrandt AG, also accuses Fisker of wrongfully holding onto IP associated with those vehicles. It’s asking for around $13 million in damages.

The lawsuit adds to a pile of legal trouble facing Fisker, which is on the brink of bankruptcy. At least 30 lawsuits alleging lemon law violations have been filed, a handful of which Fisker has already settled. A former director has filed a proposed class action claiming unpaid wages. A textile supplier has also sued Fisker for more than $1 million that it alleges the EV startup never paid.

The engineering lawsuit stands out amid the legal trouble because it suggests that financial cracks were already forming inside Fisker last August despite the bold claims its CEO made on that stage.

“The lawsuit filed by Bertrandt is without merit,” Matthew DeBord, Fisker’s vice president of communications, said in an email to TechCrunch. “It is a legally baseless and disappointing attempt by what has been a valued partner to extract from Fisker payments and intellectual property to which Bertrandt has no right under the relevant agreements or otherwise.” He declined to comment on the other cases.

Bertrandt says in the complaint filed in Michigan Eastern District Court that it entered into a “design and development agreement” with Fisker in May 2022 to perform “engineering, design, and development services” on the Pear – a contract worth north of $35 million, according to a copy of the design and development agreement attached to the lawsuit. (The agreement also shows that Fisker had previously hired Bertrandt to perform a feasibility study, cost analysis, timing proposal and other items for the Pear EV.)

At some point after entering into the agreement, Bertrandt says Fisker asked it to do similar work in connection with the Alaska pickup truck. Bertrandt says in the complaint that a formal written agreement was never executed with Fisker for the Alaska, but that it provided a quote of $1.66 million that Fisker agreed to pay.

Fisker stopped paying Bertrandt at the end of August 2023, according to the complaint. The company continued to fail to pay invoices through January 31, 2024, bringing the total unpaid to $7,061,443. The engineering firm also claims that Fisker’s decision to put the development work on the Pear and Alaska EVs on “pause” is an additional breach of the contract as it caused Bertrandt to suffer delay costs.

Bertrandt says it had a meeting with Fisker on February 6, 2024 where the EV startup “acknowledged its liability for payment of these invoices and agreed to promptly pay $3,685,000 as a partial payment” – but then never made that payment.

Breaching the contract, according to Bertrandt, has cost the engineering firm an additional $5,858,000 in “lost profits, delay costs, and incidental damages,” which is why it’s seeking $12,919,443 in total damages.
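The complaint’s totals are internally consistent; a quick check of the arithmetic, using the figures reported above:

```python
# Figures from Bertrandt's complaint against Fisker, as reported above
unpaid_invoices = 7_061_443       # invoices unpaid through January 31, 2024
additional_damages = 5_858_000    # lost profits, delay costs, incidental damages

total_sought = unpaid_invoices + additional_damages
print(f"${total_sought:,}")  # $12,919,443, the total damages Bertrandt seeks
```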

What’s more, the firm says it demanded on April 22 that Fisker “return all of Bertrandt’s intellectual property” and “certify in writing that Fisker had not retained any hard copies or electronic copies,” and claims the EV startup has “failed to do either.”

“Fisker has been unjustly enriched at the expense of Bertrandt,” lawyers for the firm write in the complaint.

Bertrandt isn’t the only supplier to sue Fisker so far.

Georgia-based Corinthian Textiles sued Fisker in Los Angeles Superior Court in early April. The supplier claims it entered into an agreement with the EV startup in early 2023 to provide it with “customized products for use in Fisker’s automobiles.” It doesn’t specify what products it made for Fisker, but the company’s website says its automotive division specializes in floor, trunk and cargo mats, as well as “automotive carpet.”

Corinthian says Fisker “refused, and continue[s] to refuse” to pay invoices and other fees in the amount of $1,077,571.75.

Working overtime

Days before Bertrandt sued in federal court, Robert Lee, an employee who worked for Fisker from October 2023 to March 5, 2024, filed a proposed class action in Los Angeles Superior Court alleging a pattern of overworking employees and not properly compensating them. The suit also claims Fisker failed to reimburse expenses and pay out wages owed when employees separated from the company.

Lee claims that he and other hourly employees worked “well over” eight hours a day and 40 hours per week, and instead often worked over 12 hours per day. He claims they were “frequently compelled” to work weekends. Fisker did not compensate employees for that additional time, according to the complaint. Lee also claims Fisker failed to properly track hours worked, and even deducted commissions from their hourly pay.

He claims employees were “regularly compelled to work off the clock and [Fisker Inc] created a policy to account for less hours than the total amount of hours actually worked” in order to “meet certain goals, to generate more sales.”

Lee also claims Fisker “effectively coerced and pressured its non-exempt employees to work off-the-clock, have their wages deducted, have their wages miscalculated, to shorten (tantamount to a missed meal period) or forego meal and rest periods (or not be paid for their rest breaks).”

Lemons

Fisker started getting peppered with lawsuits in California alleging that it was violating the state’s lemon law as early as last November, as TechCrunch previously reported. The company has started to settle some of those earlier lawsuits in what roughly amounts to buying back the vehicles, according to court filings and a person familiar with the settlements.

More lemon law lawsuits have continued to pour in across the state, where Fisker has delivered the bulk of its cars in the United States.

Customers may have taken action in other states where Fisker has delivered cars, like New York, Florida and Massachusetts. Those states require that lemon law disputes run through arbitration, making it difficult to know just how many actions may be pending against the company.

In its recent annual filing for 2023, Fisker noted that it is still defending against a proposed class action lawsuit from shareholders alleging violations of securities laws. Fisker then goes on to vaguely say that “[v]arious other legal actions, claims, and proceedings are pending against the Company, including, but not limited to, matters arising out of alleged product defects; employment-related matters; product warranties; and consumer protection laws.”

It also implied that it has been contacted by unnamed government agencies for information about its business, including subpoenas, in a new line of text that it had never included in any of its prior SEC filings.

“The Company also from time to time receives subpoenas and other inquiries or requests for information from agencies or other representatives of U.S. federal, state, and foreign governments,” the company wrote. DeBord, the communications VP, told TechCrunch that Fisker “currently [has] no pending subpoenas from governments.”

Correction: The article incorrectly identified Robert Lee as Fisker’s former director of technical services. The Lee who filed the lawsuit is an employee who worked for Fisker from October 2023 to March 5, 2024. The article has been corrected. 



Razer hit with $1.1M FTC fine over glowing ‘N95’ mask COVID claims | TechCrunch


The Federal Trade Commission hit Razer with a $1.1 million fine Tuesday. The order alleges that the gaming accessory maker misled consumers by claiming that its flashy Zephyr mask was certified as N95-grade.

“These businesses falsely claimed, in the midst of a global pandemic, that their face mask was the equivalent of an N95 certified respirator,” FTC Bureau of Consumer Protection Director Samuel Levine noted in a statement. “The FTC will continue to hold accountable businesses that use false and unsubstantiated claims to target consumers who are making decisions about their health and safety.”

Razer has predictably pushed back against the commission’s claims.

“We disagree with the FTC’s allegations and did not admit to any wrongdoing as part of the settlement,” a representative from the company said in a statement to TechCrunch. “It was never our intention to mislead anyone, and we chose to settle this matter to avoid the distraction and disruption of litigation and continue our focus on creating great products for gamers. Razer cares deeply about our community and is always looking to deliver technology in new and relevant ways.”

The company went on to suggest that the complaint was cherry-picked, adding that it went out of its way to refund customers and end sales of the Zephyr.

“The Razer Zephyr was conceived to offer a different and innovative face covering option for the community,” it notes. “The FTC’s claims against Razer concerned limited portions of some of the statements relating to the Zephyr. More than two years ago, Razer proactively notified customers that the Zephyr was not an N95 mask, stopped sales, and refunded customers.”

The FTC is also officially barring sales of the mask and “making COVID-related health misrepresentations or unsubstantiated health claims about protective health equipment.” It goes a step further, “prohibit[ing] the defendants from representing the health benefits, performance, efficacy, safety, or side effects of protective goods and services (as defined in the proposed order), unless they have competent and reliable scientific evidence to support the claims made.”

The filing suggests that Razer intentionally deceived consumers into believing that the $100 mask would protect against COVID. Certainly the virus was very much top of mind when the product first dropped in October 2021.

The order is currently awaiting approval and signature from a District Court judge.



Breaking down TikTok's legal arguments around free speech, national security claims | TechCrunch


Social media platform TikTok says that a bill banning the app in the U.S. is “unconstitutional” and that it will fight this latest attempt to restrict its use in court.

The bill in question, which President Joe Biden signed Wednesday, gives Chinese parent company ByteDance nine months to divest TikTok or face a ban barring U.S. app stores from distributing the app. The law received strong bipartisan support in the House and a majority Senate vote Tuesday, and is part of broader legislation including military aid for Israel and Ukraine.

“Make no mistake. This is a ban. A ban on TikTok and a ban on you and YOUR voice,” said TikTok CEO Shou Chew in a video posted on the app and other social media platforms. “Politicians may say otherwise, but don’t get confused. Many who sponsored the bill admit that a TikTok ban is their ultimate goal…It’s actually ironic because the freedom of expression on TikTok reflects the same American values that make the United States a beacon of freedom. TikTok gives everyday Americans a powerful way to be seen and heard, and that’s why so many people have made TikTok a part of their daily lives,” he added.

This isn’t the first time the U.S. government has attempted to ban TikTok, something several other countries have already implemented.

TikTok is based in Los Angeles and Singapore, but it’s owned by Chinese technology giant ByteDance. U.S. officials have warned that the app could be leveraged to further the interests of an “entity of concern.”

In 2020, former President Donald Trump issued an executive order to ban TikTok’s operations in the country, including a deadline for ByteDance to divest its U.S. operations. Trump also tried to ban new downloads of TikTok in the U.S. and barred transactions with ByteDance after a specific date.

Federal judges issued preliminary injunctions to temporarily block Trump’s ban while legal challenges proceeded, citing concerns about violation of First Amendment rights and a lack of sufficient evidence demonstrating that TikTok posed a national security threat.

After Trump left office, Biden’s administration picked up the anti-TikTok baton. Today, the same core fundamentals are at stake. So why do Congress and the White House think the outcome will be different?

TikTok has not responded to TechCrunch’s inquiry as to whether it has filed a challenge in a district court, but we know it will because both Chew and the company have said so.

When the company makes it in front of a judge, what are its chances of success?

TikTok’s ‘unconstitutional’ argument against a ban

“In light of the fact that the Trump administration’s attempt in 2020 to force ByteDance to sell TikTok or face a ban was challenged on First Amendment grounds and was rejected as an impermissible ‘indirect regulation of informational materials and personal communications,’ coupled with last December’s federal court order enjoining enforcement of Montana’s law that sought to impose a statewide TikTok ban as a ‘likely’ First Amendment violation, I believe this latest legislation suffers from the same fundamental infirmity,” Douglas E. Mirell, partner at Greenberg Glusker, told TechCrunch.

In other words, both TikTok as a corporation and its users have First Amendment rights, which a ban threatens.

In May 2023, Montana Governor Greg Gianforte signed into law a bill that would ban TikTok in the state, saying it would protect Montanans’ personal and private data from the Chinese Communist Party. TikTok then sued the state over the law, arguing that it violated the Constitution and the state was overstepping by legislating matters of national security. The case is still ongoing, and the ban has been blocked while the lawsuit progresses.

Five TikTok creators separately sued Montana arguing the ban violated their First Amendment rights, and won. This ruling blocked the Montana law from going into effect and essentially stopped the ban. A U.S. federal judge ruled that the ban was an overstep of state power and likely unconstitutional as a violation of the First Amendment. That ruling has set a precedent for future cases.

TikTok’s challenge to this latest federal bill will likely point to that court ruling, as well as the injunctions to Trump’s executive orders, as precedent for why this ban should be reversed.

TikTok may also argue that a ban would affect small and medium-sized businesses that use the platform to make a living. Earlier this month, anticipating a ban and the need for arguments against it, TikTok released an economic impact report that claims the platform generated $14.7 billion for small- to mid-sized businesses last year.

The threat to ‘national security’

Mirell says courts do give deference to the government’s claims about entities being a national security threat.

However, the Pentagon Papers case from 1971, in which the Supreme Court upheld the right to publish a classified Department of Defense study of the Vietnam War, establishes an exceptionally high bar for overcoming free speech and press protections.

“In this case, Congress’ failure to identify a specific national security threat posed by TikTok only compounds the difficulty of establishing a substantial, much less compelling, governmental interest in any potential ban,” said Mirell.

However, there is some cause for concern that the firewall between TikTok in the U.S. and its parent company in China isn’t as strong as it appears.

In June 2022, a report from BuzzFeed News found that U.S. data had been repeatedly accessed by staff in China, citing recordings from 80 TikTok internal meetings. There have also been reports in the past of Beijing-based teams ordering TikTok’s U.S. employees to restrict videos on the platform, and of TikTok telling its moderators to censor videos that mentioned things like Tiananmen Square, Tibetan independence or the banned religious group Falun Gong.

In 2020, there were also reports that TikTok moderators were told to censor political speech and suppress posts from “undesirable users” – the unattractive, poor, and disabled — which shows the company is not afraid to manipulate the algorithm for its own purposes.

TikTok has largely brushed off such accusations, but following BuzzFeed’s reporting, the company said it would move all U.S. traffic to Oracle’s infrastructure cloud service to keep U.S. user data private. That agreement, part of a larger operation called “Project Texas,” is focused on furthering the separation of TikTok’s U.S. operations from China and employing an outside firm to oversee its algorithms. In its statements responding to Biden’s signing of the TikTok ban, the company has pointed to the billions of dollars invested to secure user data and keep the platform free from outside manipulation as a result of Project Texas and other efforts.

Yaqiu Wang, China research director at political advocacy group Freedom House, believes the data privacy issue is real.

“There’s a structural issue that a lot of people who don’t work on China don’t understand, which is that by virtue of being a Chinese company – any Chinese company whether you’re public or private – you have to answer to the Chinese government,” Wang told TechCrunch, citing the Chinese government’s record for leveraging private companies for political purposes. “The political system dictates that. So [the data privacy issue] is one concern.”

“The other is the possibility of the Chinese government to push propaganda or suppress content that it doesn’t like and basically manipulate the content seen by Americans,” she continued.

Wang said there isn’t enough systemic information at present to prove the Chinese government has done this in regards to U.S. politics, but the threat is still there.

“Chinese companies are beholden to the Chinese government which absolutely has an agenda to undermine freedom around the world,” said Wang. She noted that while China doesn’t appear to have a specific agenda to suppress content or push propaganda in the U.S. today, tensions between the two countries continue to rise. If a future conflict comes to a head, China could “really leverage TikTok in a way they’re not doing now.”

Of course, American companies have been at the center of attempts by foreign entities to undermine democratic processes, as well. One need look no further than the Cambridge Analytica scandal and Russia’s use of Facebook political ads to influence the 2016 presidential election, as high-profile examples.

That’s why Wang says more important than a ban on TikTok is a comprehensive data privacy law that protects user data from being exploited and breached by all companies.

“I mean if China wants Facebook data today, it can just purchase it on the market,” Wang points out.

TikTok’s chances in court are unclear

The government has a hard case to prove, and it’s not a sure decision one way or the other. If the precedent set by past court rulings is applied in TikTok’s future case, then the company has nothing to worry about. After all, as Mirell has speculated, the TikTok ban appears to have been added as a sweetener needed to pass a larger bill that would approve aid for Israel and Ukraine. However, the current administration might also have simply disagreed with how the courts have decided to limit TikTok in the past, and want to challenge that.

“When this case goes to court, the Government (i.e., the Department of Justice) will ultimately have to prove that TikTok poses an imminent threat to the nation’s national security and that there are no other viable alternatives for protecting that national security interest short of the divestment/ban called for in this legislation,” Mirell told TechCrunch in a follow-up email.

“For its part, TikTok will assert that its own (and perhaps its users’) First Amendment rights are at stake, will challenge all claims that the platform poses any national security risk, and will argue that the efforts already undertaken by both the Government (e.g., through its ban upon the use of TikTok on all federal government devices) and by TikTok itself (e.g., through its ‘Project Texas’ initiative) have effectively mitigated any meaningful national security threat,” he explained.

In December 2022, Biden signed a bill prohibiting TikTok from being used on federal government devices. Congress has also been considering a bill called the Restrict Act, which would give the federal government more authority to address risks posed by foreign-owned technology platforms.

“If Congress didn’t think that [Project Texas] was sufficient, they could draft and consider legislation to enhance that protection,” said Mirell. “There are plenty of ways to deal with data security and potential influence issues well short of divestment, much less a ban.”





Adobe claims its new image generation model is its best yet | TechCrunch


Firefly, Adobe’s family of generative AI models, doesn’t have the best reputation among creatives.

The Firefly image generation model in particular has been derided as underwhelming and flawed compared to Midjourney, OpenAI’s DALL-E 3, and other rivals, with a tendency to distort limbs and landscapes and miss the nuances in prompts. But Adobe is trying to right the ship with its third-generation model, Firefly Image 3, releasing this week during the company’s Max London conference.

The model, now available in Photoshop (beta) and Adobe’s Firefly web app, produces more “realistic” imagery than its predecessor (Image 2) and its predecessor’s predecessor (Image 1) thanks to an ability to understand longer, more complex prompts and scenes as well as improved lighting and text generation capabilities. It should more accurately render things like typography, iconography, raster images and line art, says Adobe, and is “significantly” more adept at depicting dense crowds and people with “detailed features” and “a variety of moods and expressions.”

For what it’s worth, from a brief, unscientific comparison, Image 3 does appear to be a step up from Image 2.

I wasn’t able to try Image 3 myself. But Adobe PR sent a few outputs and prompts from the model, and I managed to run those same prompts through Image 2 on the web to get samples to compare the Image 3 outputs with. (Keep in mind that the Image 3 outputs could’ve been cherry-picked.)

Notice the lighting in this headshot from Image 3 compared to the one below it, from Image 2:

From Image 3. Prompt: “Studio portrait of young woman.”

Same prompt as above, from Image 2.

The Image 3 output looks more detailed and lifelike to my eyes, with shadowing and contrast that’s largely absent from the Image 2 sample.

Here’s a set of images showing Image 3’s scene understanding at play:

From Image 3. Prompt: “An artist in her studio sitting at desk looking pensive with tons of paintings and ethereal.”

Same prompt as above. From Image 2.

Note the Image 2 sample is fairly basic compared to the output from Image 3 in terms of the level of detail — and overall expressiveness. There’s wonkiness going on with the subject in the Image 3 sample’s shirt (around the waist area), but the pose is more complex than the subject’s from Image 2. (And Image 2’s clothes are also a bit off.)

Some of Image 3’s improvements can no doubt be traced to a larger and more diverse training data set.

Like Image 2 and Image 1, Image 3 is trained on uploads to Adobe Stock, Adobe’s royalty-free media library, along with licensed and public domain content for which the copyright has expired. Adobe Stock grows all the time, and consequently so too does the available training data set.

In an effort to ward off lawsuits and position itself as a more “ethical” alternative to generative AI vendors who train on images indiscriminately (e.g. OpenAI, Midjourney), Adobe has a program to pay Adobe Stock contributors to the training data set. (We’ll note that the terms of the program are rather opaque, though.) Controversially, Adobe also trains Firefly models on AI-generated images, which some consider a form of data laundering.

Recent Bloomberg reporting revealed AI-generated images in Adobe Stock aren’t excluded from Firefly image-generating models’ training data, a troubling prospect considering those images might contain regurgitated copyrighted material. Adobe has defended the practice, claiming that AI-generated images make up only a small portion of its training data and go through a moderation process to ensure they don’t depict trademarks or recognizable characters or reference artists’ names.

Of course, neither diverse, more “ethically” sourced training data nor content filters and other safeguards guarantee a perfectly flaw-free experience — see users generating people flipping the bird with Image 2. The real test of Image 3 will come once the community gets its hands on it.

New AI-powered features

Image 3 powers several new features in Photoshop beyond enhanced text-to-image.

A new “style engine” in Image 3, along with a new auto-stylization toggle, allows the model to generate a wider array of colors, backgrounds and subject poses. They feed into Reference Image, an option that lets users condition the model on an image whose colors or tone they want their future generated content to align with.

Three new generative tools — Generate Background, Generate Similar and Enhance Detail — leverage Image 3 to perform precision edits on images. The (self-descriptive) Generate Background replaces a background with a generated one that blends into the existing image, while Generate Similar offers variations on a selected portion of a photo (a person or an object, for example). As for Enhance Detail, it “fine-tunes” images to improve sharpness and clarity.

If these features sound familiar, that’s because they’ve been in beta in the Firefly web app for at least a month (and Midjourney for much longer than that). This marks their Photoshop debut — in beta.

Speaking of the web app, Adobe isn’t neglecting this alternate route to its AI tools.

To coincide with the release of Image 3, the Firefly web app is getting Structure Reference and Style Reference, which Adobe’s pitching as new ways to “advance creative control.” (Both were announced in March, but they’re now becoming widely available.) With Structure Reference, users can generate new images that match the “structure” of a reference image — say, a head-on view of a race car. Style Reference is essentially style transfer by another name, preserving the content of an image (e.g. elephants on an African safari) while mimicking the style (e.g. pencil sketch) of a target image.

Here’s Structure Reference in action:

Original image.

Transformed with Structure Reference.

And Style Reference:

Original image.

Transformed with Style Reference.

I asked Adobe if, with all the upgrades, Firefly image generation pricing would change. Currently, the cheapest Firefly premium plan is $4.99 per month — undercutting competition like Midjourney ($10 per month) and OpenAI (which gates DALL-E 3 behind a $20-per-month ChatGPT Plus subscription).

Adobe said that its current tiers will remain in place for now, along with its generative credit system. It also said that its indemnity policy, which states Adobe will pay copyright claims related to works generated in Firefly, won’t be changing either, nor will its approach to watermarking AI-generated content. Content Credentials — metadata to identify AI-generated media — will continue to be automatically attached to all Firefly image generations on the web and in Photoshop, whether generated from scratch or partially edited using generative features.





CesiumAstro claims former exec spilled trade secrets to upstart competitor AnySignal | TechCrunch


CesiumAstro alleges in a newly filed lawsuit that a former executive disclosed trade secrets and confidential information about sensitive tech, investors, and customers to a competing startup.

Austin-based Cesium develops active phased array and software-defined radio systems for spacecraft, missiles, and drones. While phased array antenna systems have been used on satellites for decades, Cesium has considerably advanced and productized the tech over its seven years in operation. The startup has landed over $100 million in venture and government funding, which it has used to develop a suite of products for commercial and defense customers.
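The article doesn’t detail Cesium’s designs, but the general principle behind electronically steered phased arrays is standard textbook material: each antenna element is driven with a progressive phase shift so the emitted wavefronts add constructively in the desired direction. A minimal sketch for a uniform linear array (generic formula, not any vendor’s implementation):

```python
import math

def element_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Per-element phase shifts (radians) that steer a uniform linear
    array's main beam steer_deg off boresight. Textbook formula:
    phase step between adjacent elements is k * d * sin(theta)."""
    k = 2 * math.pi / wavelength_m                      # wavenumber
    delta = k * spacing_m * math.sin(math.radians(steer_deg))
    return [(n * delta) % (2 * math.pi) for n in range(n_elements)]

# Example: 8 elements at half-wavelength spacing, beam steered 30 degrees
wl = 0.03                                               # 3 cm wavelength (X-band)
phases = element_phases(8, wl / 2, wl, 30.0)
```

At half-wavelength spacing, a 30-degree steer works out to roughly a π/2 phase step between adjacent elements; changing `steer_deg` re-points the beam with no moving parts, which is what makes the approach attractive for satellites, missiles, and drones.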

The technology is niche: only a handful of companies work at the cutting edge of space-based radio technology, and Cesium no doubt pays close attention to any new entrant in this field. AnySignal, a startup that came out of stealth last October but was formally incorporated in 2022, certainly caught the company’s eye, not least because it edged out Cesium in a sales bid to a major customer and attempted to solicit the interest of one of Cesium’s early investors — both facts stated in the lawsuit.

According to the suit, filed on March 25, these facts are directly related to former VP of Product Erik Luther’s misappropriation of trade secrets and confidential information on investors and customers, which Cesium alleges he subsequently disclosed to AnySignal. Notably, Luther did not leave Cesium to work for AnySignal, instead taking a role as head of marketing at a company that operates in a different sector entirely. But the suit says that Luther maintained “personal connections” with AnySignal’s cofounders, having worked with AnySignal CEO John Malsbury previously at a different company.

This resulted in AnySignal “recruiting and inducing Luther … to improperly disclose” the confidential and trade secret information, the suit says. AnySignal’s CEO and CesiumAstro did not respond to TechCrunch’s request for comment; a lawyer representing Luther referred TechCrunch to the March 29 legal filings cited below.

Cesium is clear on its position in the lawsuit: it does not believe that AnySignal could have developed its complex radio technology on its timeline and with its existing resources — “absent CesiumAstro’s technical diagrams and specifications (to which Luther had access).”

“With only a few employees and $5 million dollars in investor funding, [AnySignal] would not even be in the same orbit as CesiumAstro, which has spent tens of millions of dollars working with (now) 170 employees for seven years to develop its technologies,” the suit says. “But with Luther’s help, AnySignal has launched to directly compete with CesiumAstro in the specialized space for software-defined radios.”

Luther strongly denied all the allegations in two separate documents filed with the court on March 29; regarding the claim that he worked in concert with AnySignal, he says the allegation is “not only false…but invented out of whole cloth.” (The response also denies Cesium’s claim that it is an “industry leader.”)

Cesium “does not cite any facts or evidence whatsoever linking Luther and any of AnySignal’s business efforts and the alleged evidence that [Cesium] does cite do not support [its] contentions,” Luther’s lawyer claims in the filing. He goes on to say that Cesium takes a “Grand Canyon-sized leap from the paltry, easily explainable evidence it cites to the remarkable allegation that Luther has been secretly assisting AnySignal and feeding them [Cesium’s] trade secrets without citing any evidence whatsoever.”

El Segundo-based AnySignal was founded in May 2022 by Malsbury and COO Jeffrey Osborne, and emerged from stealth touting $5 million in seed funding last year. The company is developing a software-defined radio platform; Cesium’s lawsuit names it as a “direct competitor.” In February, a month before the suit was filed, AnySignal announced it had landed a partnership with private space station developer Vast for an advanced communication system for Vast’s flagship station, Haven-1.

The suit was filed in the Western District of Texas under case no. 1:24-cv-314.



Meta releases Llama 3, claims it's among the best open models available | TechCrunch


Meta has released the latest entry in its Llama series of open source generative AI models: Llama 3. Or, more accurately, the company has open sourced two models in its new Llama 3 family, with the rest to come at an unspecified future date.

Meta describes the new models — Llama 3 8B, which contains 8 billion parameters, and Llama 3 70B, which contains 70 billion parameters — as a “major leap” compared to the previous-gen Llama models, Llama 2 7B and Llama 2 70B, performance-wise. (Parameters essentially define the skill of an AI model on a problem, like analyzing and generating text; higher-parameter-count models are, generally speaking, more capable than lower-parameter-count models.) In fact, Meta says that, for their respective parameter counts, Llama 3 8B and Llama 3 70B — trained on two custom-built 24,000 GPU clusters — are among the best-performing generative AI models available today.

That’s quite a claim to make. So how is Meta supporting it? Well, the company points to the Llama 3 models’ scores on popular AI benchmarks like MMLU (which attempts to measure knowledge), ARC (which attempts to measure skill acquisition) and DROP (which tests a model’s reasoning over chunks of text). As we’ve written about before, the usefulness — and validity — of these benchmarks is up for debate. But for better or worse, they remain one of the few standardized ways by which AI players like Meta evaluate their models.

Llama 3 8B bests other open source models like Mistral’s Mistral 7B and Google’s Gemma 7B, both of which contain 7 billion parameters, on at least nine benchmarks: MMLU, ARC, DROP, GPQA (a set of biology-, physics- and chemistry-related questions), HumanEval (a code generation test), GSM-8K (math word problems), MATH (another mathematics benchmark), AGIEval (a problem-solving test set) and BIG-Bench Hard (a commonsense reasoning evaluation).
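Benchmarks like MMLU ultimately boil down to a simple number: the fraction of multiple-choice items a model answers correctly. A minimal sketch of that scoring, using invented answer keys and model picks purely for illustration:

```python
# Toy illustration of how an MMLU-style benchmark score is computed:
# each item is a question with lettered choices and a gold answer, and
# the headline figure is simply the fraction the model gets right.
# The answer keys and predictions below are invented for illustration.

def benchmark_accuracy(gold_answers, model_answers):
    """Fraction of items where the model's choice matches the gold label."""
    correct = sum(g == m for g, m in zip(gold_answers, model_answers))
    return correct / len(gold_answers)

gold = ["B", "D", "A", "C", "B"]       # reference answer keys (hypothetical)
predicted = ["B", "D", "C", "C", "B"]  # a model's picks (hypothetical)

print(f"accuracy: {benchmark_accuracy(gold, predicted):.0%}")  # accuracy: 80%
```

The simplicity of the metric is part of why these scores are easy to compare across vendors, and also why a few percentage points of difference can be hard to interpret.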

Now, Mistral 7B and Gemma 7B aren’t exactly on the bleeding edge (Mistral 7B was released last September), and in a few of the benchmarks Meta cites, Llama 3 8B scores only a few percentage points higher than either. But Meta also makes the claim that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models, including Gemini 1.5 Pro, the latest in Google’s Gemini series.

Image Credits: Meta

Llama 3 70B beats Gemini 1.5 Pro on MMLU, HumanEval and GSM-8K, and — while it doesn’t rival Anthropic’s most performant model, Claude 3 Opus — Llama 3 70B scores better than the weakest model in the Claude 3 series, Claude 3 Sonnet, on five benchmarks (MMLU, GPQA, HumanEval, GSM-8K and MATH).

Image Credits: Meta

For what it’s worth, Meta also developed its own test set covering use cases ranging from coding and creative writing to reasoning to summarization, and — surprise! — Llama 3 70B came out on top against Mistral’s Mistral Medium model, OpenAI’s GPT-3.5 and Claude Sonnet. Meta says that it gated its modeling teams from accessing the set to maintain objectivity, but obviously — given that Meta itself devised the test — the results have to be taken with a grain of salt.

Image Credits: Meta

More qualitatively, Meta says that users of the new Llama models should expect more “steerability,” a lower likelihood of refusing to answer questions, and higher accuracy on trivia questions, questions pertaining to history and STEM fields such as engineering and science, and general coding recommendations. That’s in part thanks to a much larger data set: a collection of 15 trillion tokens, seven times the size of the Llama 2 training set. (In the AI field, “tokens” refers to subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”)
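The word-to-token relationship can be made concrete with a toy tokenizer. Real models use learned subword vocabularies (byte-pair encoding and the like), not the naive fixed-size chunking below, which is a stand-in for illustration only and is not Llama’s actual scheme:

```python
# Toy illustration of tokenization: text is split into pieces smaller
# than words, so a corpus contains more tokens than words. The 3-character
# chunking here is a deliberately crude stand-in for a real learned
# subword tokenizer such as byte-pair encoding.

def toy_tokenize(text, chunk=3):
    tokens = []
    for word in text.split():
        tokens.extend(word[i:i + chunk] for i in range(0, len(word), chunk))
    return tokens

print(toy_tokenize("fantastic"))  # ['fan', 'tas', 'tic']

sentence = "generative models train on token sequences"
print(len(sentence.split()), "words ->", len(toy_tokenize(sentence)), "tokens")
```

The point of the sketch: the same text yields more tokens than words, which is why training-set sizes quoted in tokens look so much larger than word counts.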

Where did this data come from? Good question. Meta wouldn’t say, revealing only that it drew from “publicly available sources,” that the set included four times more code than the Llama 2 training data set, and that 5% of the set is non-English data (spanning ~30 languages), meant to improve performance on languages other than English. Meta also said it used synthetic data — i.e., AI-generated data — to create longer documents for the Llama 3 models to train on, a somewhat controversial approach due to the potential performance drawbacks.

“While the models we’re releasing today are only fine tuned for English outputs, the increased data diversity helps the models better recognize nuances and patterns, and perform strongly across a variety of tasks,” Meta writes in a blog post shared with TechCrunch.

Many generative AI vendors see training data as a competitive advantage and thus keep it and info pertaining to it close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Recent reporting revealed that Meta, in its quest to maintain pace with AI rivals, at one point used copyrighted ebooks for AI training despite the company’s own lawyers’ warnings; Meta and OpenAI are the subject of an ongoing lawsuit brought by authors including comedian Sarah Silverman over the vendors’ alleged unauthorized use of copyrighted data for training.

So what about toxicity and bias, two other common problems with generative AI models (including Llama 2)? Does Llama 3 improve in those areas? Yes, claims Meta.

Meta says that it developed new data-filtering pipelines to boost the quality of its model training data, and that it’s updated its pair of generative AI safety suites, Llama Guard and CybersecEval, to help prevent misuse of, and unwanted text generations from, Llama 3 and other models. The company’s also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities.

Filtering isn’t foolproof, though — and tools like Llama Guard, CybersecEval and Code Shield only go so far. (See: Llama 2’s tendency to make up answers to questions and leak private health and financial information.) We’ll have to wait and see how the Llama 3 models perform in the wild, inclusive of testing from academics on alternative benchmarks.

Meta says that the Llama 3 models — which are available for download now, and powering Meta’s Meta AI assistant on Facebook, Instagram, WhatsApp, Messenger and the web — will soon be hosted in managed form across a wide range of cloud platforms including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM’s WatsonX, Microsoft Azure, Nvidia’s NIM and Snowflake. In the future, versions of the models optimized for hardware from AMD, AWS, Dell, Intel, Nvidia and Qualcomm will also be made available.

And more capable models are on the horizon.

Meta says that it’s currently training Llama 3 models over 400 billion parameters in size — models with the ability to “converse in multiple languages,” take in more data, and understand images and other modalities as well as text, which would bring the Llama 3 series in line with open releases like Hugging Face’s Idefics2.

Image Credits: Meta

“Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context and continue to improve overall performance across core [large language model] capabilities such as reasoning and coding,” Meta writes in a blog post. “There’s a lot more to come.”

Indeed.



Consumer Financial Protection Bureau fines BloomTech for false claims | TechCrunch


In an order today, the U.S. Consumer Financial Protection Bureau (CFPB) said that BloomTech, the for-profit coding bootcamp previously known as the Lambda School, deceived students about the cost of loans, made false claims about graduates’ hiring rates and engaged in illegal lending masked as “income sharing” agreements with high fees.

The order marks the end of the CFPB’s investigation into BloomTech’s practices — and the start of the agency’s penalties on the organization.

The CFPB is permanently banning BloomTech from consumer lending activities and its CEO, Austen Allred, from student lending for a period of ten years. In addition, the agency is ordering BloomTech and Allred to cease collecting payments on loans for graduates who didn’t have a qualifying job and allow students to withdraw their funds without penalty — as well as eliminate finance charges for “certain agreements.”

“BloomTech and its CEO sought to drive students toward income share loans that were marketed as risk-free, but in fact carried significant finance charges and many of the same risks as other credit products,” CFPB director Rohit Chopra said in a statement. “Today’s action underscores our increased focus on investigating individual executives and, when appropriate, charging them with breaking the law.”

BloomTech and Allred must also pay the CFPB over $164,000 in civil penalties to be deposited in the agency’s victims relief fund, with BloomTech contributing ~$64,000 and Allred forking over the remainder ($100,000).

Allred founded BloomTech, which rebranded from the Lambda School in 2022 after cutting half its staff, in 2017. Based in San Francisco, the vocational organization — owned primarily by Allred — is backed by various VC funds and investors including Gigafund, Tandem Fund, Y Combinator, GV, GGV and Stripe, and at one time was valued at over $150 million.

Critics almost immediately attacked the firm’s then-pioneering business model — the income share agreement, or ISA — as predatory.

For BloomTech’s short-term, typically six-to-nine-month certification — not degree — programs in fields spanning web development, data science and backend engineering, the school originated income-share loans to fund students’ tuition. (According to the CFPB, BloomTech has originated “at least” 11,000 loans to date.) These loans require that recipients who earn more than $50,000 in a related industry pay BloomTech 17% of their pre-tax income each month until reaching the 24-payment or $30,000 total repayment threshold.
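The repayment terms as described — 17% of pre-tax income per month for earners above $50,000, capped at 24 payments or $30,000 total, whichever comes first — can be sketched in a few lines. The salary figures below are hypothetical, chosen only to show both stopping conditions:

```python
# Sketch of the income-share repayment terms as described in the article:
# graduates earning over $50,000 pay 17% of pre-tax income each month,
# until either 24 payments are made or $30,000 total has been repaid.
# Salary inputs are hypothetical examples, not figures from the case.

def total_repayment(annual_salary, share=0.17, max_payments=24, cap=30_000.0):
    if annual_salary <= 50_000:
        return 0.0  # below the income threshold, no payments are owed
    monthly_payment = (annual_salary / 12) * share
    total, payments = 0.0, 0
    while payments < max_payments and total < cap:
        total = min(total + monthly_payment, cap)  # final payment trimmed at the cap
        payments += 1
    return total

print(total_repayment(60_000))   # 24 payments of ~$850/month, ~$20,400 total
print(total_repayment(130_000))  # hits the $30,000 cap before 24 payments
```

At a $60,000 salary the 24-payment limit binds; at higher salaries the $30,000 cap binds instead, which is why the CFPB characterized the product as carrying many of the same risks as other credit.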

BloomTech didn’t market the loans as such, saying that they didn’t create debt and were “risk free,” and advertised a 71%-86% job placement rate. But the CFPB found these marketing claims and others to be flatly untrue.

BloomTech’s loans in fact carried an annual percentage rate and an average finance charge of around $4,000, neither of which students were made aware of, and a single missed payment triggered a default. The school’s job placement rates were closer to 50% and sank as low as 30%. And, unbeknownst to many students, BloomTech was selling a portion of its loans to investors while depriving recipients of rights they should’ve had under a federal protection known as the Holder Rule.

Prior to the CFPB order, BloomTech, which briefly landed in hot water with California’s oversight board several years ago for operating without approval, had faced other lawsuits claiming the school misrepresented how likely graduates were to get a job and how much they were likely to earn. Last year, leaked documents obtained by Business Insider raised questions about the company inflating its efficacy and hyping up a curriculum that didn’t upskill students at the level they expected.

To comply with the CFPB order, BloomTech must stop collecting payments on loans to graduates who didn’t receive a qualifying job in the past year, and eliminate the finance charge for those who graduated the program more than 18 months ago and obtained a qualifying job making $70,000 or less. The company must also allow current students to withdraw from the program and cancel their loans, or continue in the program with a third-party loan.



