
Tag: Security


Google lays off workers, Tesla cans its Supercharger team and UnitedHealthcare reveals security lapses | TechCrunch


Welcome, folks, to Week in Review (WiR), TechCrunch’s regular newsletter that recaps the week that was in tech. This edition’s a tad bittersweet for me — it’ll be my last (for a while, anyway). Soon, I’ll be shifting my attention to a new AI-focused newsletter, which I’m super thrilled about. Stay tuned!

Now, on with the news: This week Google laid off staff from its Flutter, Dart and Python teams weeks before its annual I/O developer conference. A total of 200 people were let go across Google’s “Core” teams, which included those working on app platforms and other engineering roles.

Elsewhere, Tesla CEO Elon Musk gutted the company’s team responsible for overseeing its Supercharger network in a new round of layoffs — despite recently winning over major automakers like Ford and General Motors. The cuts are so complete that Musk suggested in an email that they’ll force Tesla to slow the Supercharger network’s expansion.

And UnitedHealth Group’s CEO, Andrew Witty, told a House subcommittee that the ransomware gang that hacked U.S. health tech giant Change Healthcare — a UnitedHealth subsidiary — used a set of stolen credentials to access Change Healthcare systems that weren’t protected by multifactor authentication. Last week, UnitedHealth said that the hackers stole health data on a “substantial proportion of people in America.”

Lots else happened. We recap it all in this edition of WiR — but first, a reminder to sign up to receive the WiR newsletter in your inbox every Saturday.

News

Hallucinations, hallucinations: OpenAI is facing another privacy complaint in the EU. This one — filed by privacy rights nonprofit noyb on behalf of an individual complainant — targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.

Just walk out … of Sam’s Club: Sam’s Club customers who pay either at a register or through the Scan & Go mobile app can now walk out of the store without having their purchases double-checked. The technology, unveiled at the Consumer Electronics Show in January, has now been deployed at 20% of Sam’s Club locations.

TikTok circumvents Apple rules: TikTok is presenting some users with a link to a website for purchasing the coins used to tip digital creators on the platform. Typically, these coins must be bought via in-app purchase — which requires a 30% commission paid to Apple — suggesting TikTok might be attempting to skirt Apple’s App Store rules.

NIST’s GenAI platform: The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, companies and the broader public, has launched NIST GenAI, a new program to assess generative AI technologies, including text- and image-generating AI.

Getir pulls out: Getir, the quick commerce behemoth, has pulled out of the U.S., U.K. and Europe to focus on Turkey, its home country. The company — once valued close to $12 billion — said that the move would impact thousands of gig and full-time workers.

Analysis

Inside the Techstars “cold war”: Brilliant reporting by Dom peels back the curtains on a year of financial losses and employee cuts at startup accelerator Techstars, whose CEO, Maëlle Gavet, has been a controversial force for change.

AI-powered coding: Yours truly takes a look at Copilot Workspace, somewhat of an evolution of GitHub’s AI-powered coding assistant Copilot into a more general tool — building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language.

Autonomous car racing: Tim Stevens dives into the Abu Dhabi racing event that pitted a driverless car against a Formula 1 driver.



Google expands passkey support to its Advanced Protection Program ahead of the US presidential election | TechCrunch


Ahead of the U.S. presidential election, Google is bringing passkey support to its Advanced Protection Program (APP), which is used by people who are at high risk of targeted attacks, such as campaign workers, candidates, journalists, human rights workers, and more.

APP traditionally required the use of hardware security keys, but soon users will be able to enroll in APP with passkeys. Users will have the option to use passkeys alone or alongside a password or hardware security key.

“In a critical election year, we’ll be bringing this feature to our users who need it most, and continue to work with experts like Defending Digital Campaigns, the International Foundation for Electoral Systems, Asia Centre, Internews, and Possible to help protect global high-risk users,” Google’s VP of Security Engineering, Heather Adkins, said in a blog post.

Google says passkeys have been used to authenticate users more than one billion times across over 400 million Google Accounts since the company launched passkey support in 2022. Google says passkeys are used on Google Accounts more often than legacy forms of two-step verification, such as SMS one-time passwords and app-based one-time passwords combined.

Passkey logins make it harder for bad actors to remotely access your accounts since they would also need physical access to a phone. Passkeys also remove the need to rely on username and password combinations, which can be susceptible to phishing.
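TechCrunch doesn’t spell out the mechanics, but the phishing resistance comes from the WebAuthn API that passkeys are built on: the site asks the browser for a signed assertion, the user verifies locally on the device (fingerprint, face or PIN), and the signature is bound to the site’s domain. Below is a minimal browser-side sketch in TypeScript; the /webauthn/* endpoints and response shapes are hypothetical placeholders, and only navigator.credentials.get() and its options come from the standard API.

// Minimal sketch of a passkey (WebAuthn) sign-in from a web page.
// The /webauthn/* endpoints and their JSON shapes are hypothetical;
// navigator.credentials.get() and its publicKey options are the standard API.
async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a fresh random challenge from our (hypothetical) server,
  //    assumed here to arrive as a plain base64 string.
  const { challenge } = await (await fetch("/webauthn/challenge", { method: "POST" })).json();

  // 2. Ask the authenticator to sign it. The user verifies locally;
  //    no password ever leaves the device.
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      rpId: window.location.hostname, // binds the assertion to this site, which is what defeats phishing
      userVerification: "required",
    },
  })) as PublicKeyCredential;

  // 3. Send the assertion back for verification against the public key stored
  //    at registration. (Real code would base64url-encode the ArrayBuffer
  //    fields in credential.response before posting; omitted for brevity.)
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: credential.id, type: credential.type }),
  });
}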

The technology has been adopted by numerous other companies, including Apple, Amazon, X (formerly Twitter), PayPal, WhatsApp, GitHub and TikTok.

Google also announced that it’s expanding its Cross-Account Protection program, which shares security notifications about suspicious activity with the third-party apps you’ve connected to your Google account. The company says this helps prevent cybercriminals from gaining access to one of your accounts and using it to infiltrate others. Google notes that it’s protecting 2.4 billion accounts across 3.4 million apps and sites and that it’s growing its collaborations across the industry.
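Cross-Account Protection is built on the OpenID RISC standard, in which Google pushes signed security event tokens (JWTs) to a receiver endpoint registered by the connected app. The sketch below shows how such a receiver might verify and react to an event using the jose library; the JWKS URL, audience value and event handling are assumptions based on the public specification, not Google-supplied code.

import { createRemoteJWKSet, jwtVerify } from "jose";

// Assumption: Google publishes its RISC signing keys at this JWKS endpoint.
const GOOGLE_RISC_JWKS = createRemoteJWKSet(
  new URL("https://risc.googleapis.com/v1beta/jwks")
);

// Handle one security event token (SET) pushed to our receiver endpoint.
export async function handleSecurityEventToken(token: string): Promise<void> {
  const { payload } = await jwtVerify(token, GOOGLE_RISC_JWKS, {
    issuer: "https://accounts.google.com",          // assumed issuer for Google-sent events
    audience: "https://example.com/risc/receiver",  // hypothetical registered receiver
  });

  // RISC events sit in the "events" claim, keyed by event-type URIs.
  const events = (payload.events ?? {}) as Record<string, { subject?: unknown }>;
  for (const [eventType, details] of Object.entries(events)) {
    if (eventType.endsWith("/sessions-revoked") || eventType.endsWith("/account-disabled")) {
      // A linked Google account was compromised or disabled, so end our
      // own sessions for that user too rather than waiting for a breach.
      console.log("Revoking local sessions for subject:", details.subject);
    }
  }
}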



Belgium's Aikido lands $17M Series A for its 'no BS' security platform aimed at developers | TechCrunch


Developers have a problem. It used to be the case that only large enterprises needed to worry about security, but today, every startup is capable of holding huge amounts of customer data. That means developers across the board have to worry about how secure their platform is, and they often find themselves grappling with complicated tools to manage security.

Now, Aikido, a small startup in Ghent, Belgium, thinks it has an answer to that dilemma: A no-nonsense, open-source, developer-facing security platform. And the startup has just raised a $17 million Series A to further build out its product.

“There have been security tools for three decades, but I think we’re the first where the buyer is the user. With other tools, the CSO is the buyer, but then some poor developer is the user. We are the ‘no BS’ platform,” Aikido’s founder and CTO, Willem Delbare, told TechCrunch.

He has a point.

Aikido’s main competitors tend to make tools that are aimed at large enterprises rather than at the people who actually have to deploy them. Enterprise platform Snyk, for example, used to resemble Aikido but pivoted to larger firms some time ago. Other competitors include JIT, which caters to small-to-midmarket customers. In the middle market, you have Endor Labs and Guardrails, and then there are larger companies like Mend, Qwiet, Oxeye, Ox, Arnica and Apiiro.

Delbare told me that Aikido’s main differentiators are that it has a freemium model and it actively open-sources new products. “This makes us flexible, fast, and affordable,” he said.

The company also offers all-in-one security, flat pricing and far fewer notifications. “We only bother developers when something ‘real’ is wrong. We aggressively triage alerts to cut noise and false positives,” he said.
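Aikido doesn’t publish its triage logic, so the following is only an illustration of the general idea of cutting noise: deduplicate findings by a stable fingerprint and surface only the ones that are both severe and actually reachable in the deployed code. The Finding shape and the rules here are invented for the example.

// Illustrative alert triage: collapse duplicates, then keep only findings a
// developer should realistically act on. Not Aikido's actual implementation.
interface Finding {
  rule: string;        // e.g. "vulnerable-dependency", "hardcoded-secret"
  file: string;
  severity: "low" | "medium" | "high" | "critical";
  reachable: boolean;  // is the vulnerable code path actually used?
}

function triage(findings: Finding[]): Finding[] {
  const seen = new Set<string>();
  const surfaced: Finding[] = [];

  for (const f of findings) {
    const fingerprint = `${f.rule}:${f.file}`; // dedupe repeated hits on the same spot
    if (seen.has(fingerprint)) continue;
    seen.add(fingerprint);

    const actionable = (f.severity === "high" || f.severity === "critical") && f.reachable;
    if (actionable) surfaced.push(f);
  }
  return surfaced;
}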

That logic seems to have worked fairly well: The company already has 3,000 small-to-midsize customers. And this Series A, led by European venture firm Singular, comes less than six months after the company raised a $5 million seed round. The company has now raised a total of $22.5 million.

Another aspect that sets Aikido apart is that it’s based in Ghent. The security industry is dominated by Israeli and U.S. incumbents and their veterans (the security industry’s version of the ‘PayPal Mafia’ is called ‘the Checkpoint Mafia’).

Delbare said there’s a certain “playbook” that U.S. or Israeli security startups follow: “They take a very technically advanced security feature, become really good at it, raise a ton of cash, and then two years later, get bought by Palo Alto Networks or Cisco. And then they just repeat that playbook over and over.”

He stressed that Aikido doesn’t follow that pattern. “We’re not doing that kind of playbook. We’re not one single feature. If we ever get bought, it will just be for our customer base and the revenue. Not for a platform that fixes a feature gap,” he said.

“These tools basically look like the inside of an F-16’s cockpit. They make you feel dumb. A developer just wants to fix problems and move on with building fun features, right?” Delbare explained.

Delbare said Aikido decided to go with Singular after meeting its partner, Henri Tilloy. “I think he’s the first VC I’ve talked to in a long time who actually understood the product. Most VCs look at your company and they just see a spreadsheet,” he said.

Also on the team are co-founders Roeland Delrue (CRO and COO) and Felix Garriau (CMO). The company has brought on Madeline Lawrence, who left her role as a partner at Peak VC to join the startup as its chief brand officer.

The round also saw participation from Notion Capital and Connect Ventures, both of which co-led the previous seed round.

Aikido is tackling a large market. The network security software market is expected to increase from $24.21 billion in 2023 to $27.33 billion in 2024.

At the same time, security risks are mutating and growing rapidly, with the average cost of a data breach reaching a record high of $4.45 million in 2023, according to Upguard.



Citigroup's VC arm invests in API security startup Traceable | TechCrunch


In 2017, Jyoti Bansal co-founded San Francisco-based security company Traceable alongside Sanjay Nagaraj, a former investor. With Traceable, Bansal — who previously co-launched app performance management startup AppDynamics, acquired by Cisco in 2017 — sought to build a platform to protect customers’ APIs from cyberattacks.

Attacks on APIs — the sets of protocols that establish how platforms, apps and services communicate — are on the rise. API attacks affected nearly one quarter of organizations every week in the first month of 2024, a 20% increase from the same period a year ago, according to cybersecurity firm Check Point.

API attacks take many forms, including attempting to make an API unavailable by overwhelming it with traffic, bypassing authentication methods, and exposing sensitive data transferred via a vendor’s APIs.
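As a rough illustration of how the first two attack classes are typically blunted at the gateway (generic middleware, not Traceable’s product), an API server can pair a per-client rate limit with a mandatory authentication check:

import express from "express";

const app = express();

// Naive per-IP token bucket to absorb traffic floods (the first attack class).
const buckets = new Map<string, { tokens: number; last: number }>();
const REFILL_PER_SEC = 10;
const BURST = 20;

app.use((req, res, next) => {
  const key = req.ip ?? "unknown";
  const now = Date.now() / 1000;
  const bucket = buckets.get(key) ?? { tokens: BURST, last: now };
  bucket.tokens = Math.min(BURST, bucket.tokens + (now - bucket.last) * REFILL_PER_SEC);
  bucket.last = now;
  if (bucket.tokens < 1) {
    res.status(429).send("Too Many Requests");
    return;
  }
  bucket.tokens -= 1;
  buckets.set(key, bucket);
  next();
});

// Reject unauthenticated calls outright (the second attack class).
// isValidToken is a placeholder for a real verifier (JWT, API key, etc.).
function isValidToken(header: string): boolean {
  return header.startsWith("Bearer ") && header.length > 7;
}

app.use((req, res, next) => {
  const auth = req.header("Authorization");
  if (!auth || !isValidToken(auth)) {
    res.status(401).send("Unauthorized");
    return;
  }
  next();
});

// The third class, sensitive-data exposure, is addressed by returning only the
// fields the caller is entitled to see; that filtering is omitted here.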

“There’s a lack of recognition of the criticality of API security,” Bansal told TechCrunch in an interview, “as well as ignorance of the ever-growing attack surface in APIs and a resistance to embrace API security due to entrenched investments in security solutions that don’t address the API security problem directly.”

To Bansal’s point, more and more businesses are tapping APIs in part thanks to the generative AI boom, but in the process unwittingly exposing themselves to attacks. Per one recent study, the number of APIs used by companies increased by over 200% between July 2022 and July 2023. Gartner, meanwhile, predicts that more than 80% of enterprises will have used generative AI APIs or deployed generative AI-enabled apps by 2026.

To shield these APIs, Traceable applies AI to usage data to learn normal API behavior and spot activity that deviates from that baseline. Traceable’s software, which runs on-premises or in a fully managed cloud, can discover and catalog existing and new APIs, including undocumented and “orphaned” (i.e., deprecated) APIs, in real time, according to Bansal.
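The “learn a baseline, flag deviations” idea can be sketched very simply. The example below keeps a per-endpoint history of requests per minute and flags observations several standard deviations away from the mean; it is a toy stand-in for whatever models Traceable actually runs, and the helper functions at the bottom are hypothetical.

// Toy anomaly detector: learn a per-endpoint baseline of requests/minute,
// then flag minutes that deviate strongly from it.
class EndpointBaseline {
  private samples: number[] = [];

  learn(requestsPerMinute: number): void {
    this.samples.push(requestsPerMinute);
  }

  isAnomalous(requestsPerMinute: number, threshold = 3): boolean {
    const n = this.samples.length;
    if (n < 30) return false; // not enough history to judge
    const mean = this.samples.reduce((a, b) => a + b, 0) / n;
    const variance = this.samples.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
    const std = Math.sqrt(variance) || 1;
    // Anomalous if the observation sits more than threshold standard
    // deviations from the learned baseline.
    return Math.abs(requestsPerMinute - mean) / std > threshold;
  }
}

// Hypothetical helpers standing in for real telemetry and alerting.
declare function historicalTraffic(route: string): number[];
declare function currentRequestsPerMinute(route: string): number;
declare function alertSecurityTeam(route: string): void;

// Usage: one baseline per API route.
const baseline = new EndpointBaseline();
for (const rpm of historicalTraffic("/v1/payments")) baseline.learn(rpm);
if (baseline.isAnomalous(currentRequestsPerMinute("/v1/payments"))) {
  alertSecurityTeam("/v1/payments");
}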


“In order to detect modern threat scenarios, Traceable trained in-house models by fine-tuning open source large language base models with labeled attack data,” Bansal explained. “Our platform provides tools for API discovery, testing, protection and threat hunting workflows for IT teams.”

The API security solutions market is quickly becoming crowded, with vendors such as Noname Security, 42Crunch, Vorlon, Salt Security, Cequence, Ghost Security, Pynt, Akamai, Escape and F5 all vying for customers. According to Research and Markets, the segment could grow at a compound annual growth rate of 31.5% from 2023 to 2030, buoyed by the increasing threats in cybersecurity and the demand for more secure APIs.

But Bansal claims that Traceable is holding its own, analyzing around 500 billion API calls a month for ~50 customers and projecting revenue to double this year. Most of Traceable’s clients are in the enterprise, but Bansal says the company is exploring pilots with governments.

“Traceable is building a long-term sustainable company, which from a financial perspective means that we have a very healthy margin profile that continues to improve as our revenue grows,” he said. “We’re not profitable today by choice, as we’re investing into the business responsibly … Our focus is on strategic investments maximizing return, not simply spending.”

To that end, Traceable today announced that it raised $30 million in a strategic investment from a group of backers that included Citi Ventures (Citigroup’s corporate venture arm), IVP, Geodesic Capital, Sorenson Capital and Unusual Ventures. The investment values Traceable at $500 million post-money and brings its total raised to $110 million; the new cash will be put toward product development, scaling up Traceable’s platform and customer engineering teams, and building out the company’s partnership program, Bansal said.

Traceable has ~180 staffers currently. Bansal expects headcount to reach 230 by year-end 2024, as the bulk of the new investment goes to hiring.

“Traceable wasn’t fundraising, as we still had substantial cash runway prior to this investment,” Bansal said, adding that Traceable secured a “sizeable” line of credit in addition to the new funds, “but we received significant inbound demand from investors. With the combination of the strategic alignment with Citi Ventures and the attractive terms of the investment, we decided to take a smaller investment now to accelerate our product and go-to-market initiatives before thinking about a more substantial fundraise.”



With $175M in new funding, Island is putting the browser at the center of enterprise security | TechCrunch


Island, the secure browser company, may be the most valuable startup that you have never heard of. The company, which is putting the browser at the center of security, announced a $175 million Series D investment on Tuesday at a whopping $3 billion valuation. Island has now raised a total of $487 million.

That’s a ton of money, and it makes us wonder: What is the company doing to warrant this kind of investment at this level of value? Doug Leone, a partner at Sequoia who invested in Island going back to the A round, says that he was attracted to the company’s founding team and unique value proposition.

“The two founders, one of whom was a technical founder out of Israel — Dan Amiga — and one who was a very senior security executive out of the U.S. — Mike Fey — had a vision that if you could produce a browser based on Chromium that looks like a standard browser to the consumer employee in a corporation, but was secure, it would stop bad guys from doing a whole bunch of things,” Leone told TechCrunch.

He says the end result is that you can replace things like a VPN, data loss prevention and mobile device management, all of which can be handled right in the browser instead of through separate tools. That, in turn, lowers the overall cost of securing a network.

Island is defining a category with an enterprise browser, while allowing employees to work in a familiar environment and keeping them more secure, says Ray Wang, founder and principal analyst at Constellation Research.

“They are using the security angle to change human computing interactions,” he said. “Think of the browser as your screen into a ‘Choose Your Own Adventure’ game, and based on all the data being captured, it can deliver contextually relevant content, actions and insight, but it does it while delivering on enterprise class security of the data, process and identity.”

Fey acknowledges that if he showed up at a company with a proprietary browser, and they have 20,000 apps — which would be possible in a Fortune 100 company — then they would have to test all those apps against that browser. But the fact that Island is based on the Chromium standard means that IT can trust the browser without having to put everything through a lengthy testing process. “The browser world standardized on Chromium. This idea couldn’t have come to fruition before that,” Fey said.

In spite of the value proposition and the standardized approach, Fey says it still takes some explaining to get executives to understand that by paying for a security-focused browser, they can actually save money in the long run. “You have to explain where the ROI comes from. What am I getting? Where’s it coming from? And the ROI has to be very understandable and very believable and large,” he said.

How large? Consider that he says one company saved $300 million a year shutting down racks in a data center because they didn’t require nearly the same level of resources anymore to run the same applications.

Fey says it’s not about replacing these tools, so much as the fact that taking advantage of a standardized browser just makes it so much easier to execute on things like web filtering or even virtual desktops. It sounds simple, but the company has 280 employees, of which 100 are engineers. He says a lot of engineering work went into making this happen.

While he wouldn’t discuss specific revenue numbers, the company has around 200 customers, and has been growing steadily over the past couple of years. Leone referred to it as exponential growth.

Fey thinks that Island can be a substantial public company eventually. “We’re getting into decent ARR at this point, meaningful ARR, and our margins are good,” he said. “So you know what we think is we will make a strong IPO candidate someday, but not next year. Someday.”



Meta's approach to election security in the frame as EU probes Facebook, Instagram | TechCrunch


The European Union announced Tuesday it suspects Meta’s social networking platforms, Facebook and Instagram, of breaking the bloc’s rules for larger platforms in relation to election integrity.

The Commission has opened formal infringement proceedings to investigate Meta under the Digital Services Act (DSA), an online governance and content moderation framework. Reminder: Penalties for confirmed breaches of the regime can include fines of up to 6% of global annual turnover.

The EU’s concerns here span several areas: Meta’s moderation of political ads — which it suspects is inadequate; Meta’s policies for moderating non-paid political content, which the EU suspects are opaque and overly restrictive, whereas the DSA demands platforms’ policies deliver transparency and accountability; and Meta’s policies that relate to enabling outsiders to monitor elections.

The EU’s proceeding also targets Meta’s processes for users to flag illegal content, which it’s concerned aren’t user friendly enough; and its internal complaints handling system for content moderation decisions, which it also suspects is ineffective.

“When Meta get paid for displaying advertising it doesn’t appear that they have put in place effective mechanism of content moderation,” said a Commission official briefing journalists on background on the factors that led it to open the bundle of investigations. “Including for advertisements that could be generated by a generative AI — such as, for example, deep fakes — and these have been exploited or appear to have been exploited by malicious actors for foreign interference.”

The EU is drawing on some independent research, itself enabled by another DSA requirement that large platforms publish a searchable ad archive, which it suggested has shown Meta’s ad platform being exploited by Russian influence campaigns targeting elections via paid ads. It also said it’s found evidence of a lack of effective ads moderation by Meta being generally exploited by scammers — with the Commission pointing to a surge in financial scam ads on the platform.

On organic (non-paid) political content, the EU said Meta seems to limit the visibility of political content for users by default but does not appear to provide sufficient explanation, either of how it identifies content as political or of how that content is moderated. The Commission also said it had found evidence to suggest Meta is shadowbanning (aka limiting the visibility/reach of) certain accounts with high volumes of political posting.

If confirmed, such actions would be a breach of the DSA as the regulation puts a legal obligation on platforms to transparently communicate the policies they apply to their users.

On election monitoring, the EU is particularly concerned about Meta’s recent decision to shutter access to CrowdTangle, a tool researchers have previously been able to use for real-time election monitoring.

It has not opened an investigation on this front yet, but it has sent Meta an urgent formal request for information (RFI) about its decision to deprecate the research tool — giving the company five days to respond. Briefing journalists about the development, Commission officials suggested they could take more action in this area, such as opening a formal investigation, depending on Meta’s response.

The short deadline for a response clearly conveys a sense of urgency. Last year, soon after the EU took up the baton overseeing larger platforms’ DSA compliance with a subset of transparency and risk mitigation rules, the Commission named election integrity as one of its priority areas for its enforcement of the regulation.

During today’s briefing, Commission officials pointed to the upcoming European elections in June — questioning the timing of Meta’s decision to deprecate CrowdTangle. “Our concern — and this is also why we consider this to be a particularly urgent issue — is that just a few weeks ahead of the European election Meta has decided to deprecate this tool, which has allowed journalists… civil society actors and researchers in, for example, the 2020 US elections, to monitor election related risks.”

The Commission is worried that the tool Meta has said will replace CrowdTangle does not have equivalent or superior capabilities. Notably, the EU is concerned it will not let outsiders monitor election risks in real time. Officials also raised concerns about slow onboarding for Meta’s new tool.

“At this point we’re requesting information from Meta on how they intend to remedy the lack of a real-time election monitoring tool,” said one senior Commission official during the briefing. “We are also requesting some additional documents from them on the decision that has led them to deprecate CrowdTangle and their assessment of the capabilities of the new tool.”

Meta was contacted for comment about the Commission’s actions. In a statement a company spokesperson said: “We have a well-established process for identifying and mitigating risks on our platforms. We look forward to continuing our cooperation with the European Commission and providing them with further details of this work.”

These are the first formal DSA investigations Meta has faced — but not the first RFIs. Last year the EU sent Meta a flurry of requests for information — including in relation to the Israel-Hamas war, election security and child safety, among others.

In light of the variety of information requests on Meta platforms, the company could face additional DSA investigations as Commission enforcers work through multiple submissions.



Despite complaints, Apple hasn't yet removed an obviously fake app pretending to be RockAuto | TechCrunch


Apple’s App Store isn’t always as trustworthy as the company claims. The latest example comes from RockAuto, an auto parts dealer popular with home mechanics and other DIYers, which is upset that a fake app masquerading as its official app has not been removed from the App Store, despite numerous complaints to Apple.

RockAuto co-founder and president Jim Taylor was first alerted to the situation when customers began complaining about “annoying ads” in its app — something he said “surprised us since we don’t have an app.”

Fake RockAuto app on the App Store. Image Credits: Apple (screen capture by TechCrunch)

“We discovered someone placed an app in the Apple App Store using our logo and company information — but with the misspellings and clumsy graphics typical of phishing schemes,” he told TechCrunch.

On closer inspection, the fake app doesn’t look very legit, but it’s easy to see how someone could be fooled. Its App Store images show a photo of a truck with the word “Heading” across the image as if a template was hastily used and the work was unfinished. In addition, despite being titled “RockAuto” on the App Store, the app refers to itself as “RackAuto” throughout its App Store description.

What’s more, it promises customers that “Your privacy is a top priority” and that “all your data is securely stored and encrypted, giving you peace of mind.” That’s not likely, given the nature of this app.

The issue is not only concerning because of the app’s ability to fool at least some portion of RockAuto’s customers but also because it undermines Apple’s messaging about how the App Store is a trusted and secure marketplace — which is why it demands a cut of developers’ in-app purchase transactions. The tech giant has been fighting back against regulations like the EU’s Digital Markets Act (DMA), by claiming these laws would compromise customer safety and privacy. Apple believes that customers will be at risk if they conduct business outside its App Store with unknown parties. But, as these cases show, bad actors can too easily infiltrate its own app marketplace as well.

Fake RockAuto app on the App Store. Image Credits: Apple (screen capture by TechCrunch)

Apple has so far ignored RockAuto’s requests to remove the fake app, which were all sent through proper channels, according to documentation the company shared with TechCrunch.

While searching for a solution to this problem, RockAuto came across our coverage of a similar situation with LastPass. The password manager was also the victim of a similar scheme when a fake app pretending to be LastPass was live on the App Store for weeks. LastPass eventually had to warn its customers publicly in a blog post, as Apple did not take the fake app down until after the press coverage and LastPass’s own post went live.

Apple didn’t respond to requests for comment at the time. The company wasn’t immediately available for requests for comment about RockAuto’s complaint either.

Taylor says that RockAuto’s Customer Service manager initially reached out to Apple to resolve the situation. When he didn’t get a response, Taylor got involved.

“It’s mostly one-way since the only replies we’ve had from Apple are ‘you shouldn’t have emailed, go use the online form’ and ‘upload screen prints of the app store listing and your trademark registration,’” Taylor explains, both of which RockAuto had already done, its documentation indicates.

“Neither the uploaded documents nor the online form submissions produced any response at all,” Taylor noted, “not even the promised ‘case number in 24 hours’ despite multiple submissions.”

Since filing the complaint on April 18, 2024, RockAuto has shared its trademark registration with Apple, emailed the company, called the number provided on Apple’s copyright infringement page, sent a DMCA Takedown request and filled out Apple’s required forms.

It has not received anything other than automated responses and the fake app remains live as of the time of publication.



Breaking down TikTok's legal arguments around free speech, national security claims | TechCrunch


Social media platform TikTok says that a bill banning the app in the U.S. is “unconstitutional” and that it will fight this latest attempt to restrict its use in court.

The bill in question, which President Joe Biden signed Wednesday, gives Chinese parent company ByteDance nine months to divest TikTok; otherwise, app stores will be barred from distributing the app in the U.S. The law received strong bipartisan support in the House and a majority Senate vote Tuesday, and is part of broader legislation including military aid for Israel and Ukraine.

“Make no mistake. This is a ban. A ban on TikTok and a ban on you and YOUR voice,” said TikTok CEO Shou Chew in a video posted on the app and other social media platforms. “Politicians may say otherwise, but don’t get confused. Many who sponsored the bill admit that a TikTok ban is their ultimate goal…It’s actually ironic because the freedom of expression on TikTok reflects the same American values that make the United States a beacon of freedom. TikTok gives everyday Americans a powerful way to be seen and heard, and that’s why so many people have made TikTok a part of their daily lives,” he added.

This isn’t the first time the U.S. government has attempted to ban TikTok, something several other countries have already implemented.

TikTok is based in Los Angeles and Singapore, but it’s owned by Chinese technology giant ByteDance. U.S. officials have warned that the app could be leveraged to further the interests of an “entity of concern.”

In 2020, former President Donald Trump issued an executive order to ban TikTok’s operations in the country, including a deadline for ByteDance to divest its U.S. operations. Trump also tried to ban new downloads of TikTok in the U.S. and barred transactions with ByteDance after a specific date.

Federal judges issued preliminary injunctions to temporarily block Trump’s ban while legal challenges proceeded, citing concerns about violation of First Amendment rights and a lack of sufficient evidence demonstrating that TikTok posed a national security threat.

After Trump left office, Biden’s administration picked up the anti-TikTok baton. Today, the same core fundamentals are at stake. So why do Congress and the White House think the outcome will be different?

TikTok has not responded to TechCrunch’s inquiry as to whether it has filed a challenge in a district court, but we know it will because both Chew and the company have said so.

When the company makes it in front of a judge, what are its chances of success?

TikTok’s ‘unconstitutional’ argument against a ban

“In light of the fact that the Trump administration’s attempt in 2020 to force ByteDance to sell TikTok or face a ban was challenged on First Amendment grounds and was rejected as an impermissible ‘indirect regulation of informational materials and personal communications,’ coupled with last December’s federal court order enjoining enforcement of Montana’s law that sought to impose a statewide TikTok ban as a ‘likely’ First Amendment violation, I believe this latest legislation suffers from the same fundamental infirmity,” Douglas E. Mirell, partner at Greenberg Glusker, told TechCrunch.

In other words, both TikTok as a corporation and its users have First Amendment rights, which a ban threatens.

In May 2023, Montana Governor Greg Gianforte signed into law a bill that would ban TikTok in the state, saying it would protect Montanans’ personal and private data from the Chinese Communist Party. TikTok then sued the state over the law, arguing that it violated the Constitution and the state was overstepping by legislating matters of national security. The case is still ongoing, and the ban has been blocked while the lawsuit progresses.

Five TikTok creators separately sued Montana, arguing the ban violated their First Amendment rights, and won. That ruling blocked the Montana law from going into effect and essentially stopped the ban. The U.S. federal judge found the ban was an overstep of state power and likely unconstitutional as a violation of the First Amendment. The ruling has set a precedent for future cases.

TikTok’s challenge to this latest federal bill will likely point to that court ruling, as well as the injunctions to Trump’s executive orders, as precedent for why this ban should be reversed.

TikTok may also argue that a ban would affect small and medium-sized businesses that use the platform to make a living. Earlier this month, in anticipation of a ban and the need for arguments against it, TikTok released an economic impact report that claims the platform generated $14.7 billion for small- to mid-sized businesses last year.

The threat to ‘national security’

Mirell says courts do give deference to the government’s claims about entities being a national security threat.

However, the Pentagon Papers case from 1971, in which the Supreme Court upheld the right to publish a classified Department of Defense study of the Vietnam War, establishes an exceptionally high bar for overcoming free speech and press protections.

“In this case, Congress’ failure to identify a specific national security threat posed by TikTok only compounds the difficulty of establishing a substantial, much less compelling, governmental interest in any potential ban,” said Mirell.

However, there is some cause for concern that the firewall between TikTok in the U.S. and its parent company in China isn’t as strong as it appears.

In June 2022, a report from BuzzFeed News found that U.S. data had been repeatedly accessed by staff in China, citing recordings from 80 TikTok internal meetings. There have also been reports in the past of Beijing-based teams ordering TikTok’s U.S. employees to restrict videos on the platform, and of TikTok telling its moderators to censor videos that mentioned things like Tiananmen Square, Tibetan independence or the banned religious group Falun Gong.

In 2020, there were also reports that TikTok moderators were told to censor political speech and suppress posts from “undesirable users” (the unattractive, poor and disabled), which shows the company is not afraid to manipulate the algorithm for its own purposes.

TikTok has largely brushed off such accusations, but following BuzzFeed’s reporting, the company said it would move all U.S. traffic to Oracle’s infrastructure cloud service to keep U.S. user data private. That agreement, part of a larger operation called “Project Texas,” is focused on furthering the separation of TikTok’s U.S. operations from China and employing an outside firm to oversee its algorithms. In its statements responding to Biden’s signing of the TikTok ban, the company has pointed to the billions of dollars invested to secure user data and keep the platform free from outside manipulation as a result of Project Texas and other efforts.

Yaqiu Wang, China research director at political advocacy group Freedom House, believes the data privacy issue is real.

“There’s a structural issue that a lot of people who don’t work on China don’t understand, which is that by virtue of being a Chinese company – any Chinese company whether you’re public or private – you have to answer to the Chinese government,” Wang told TechCrunch, citing the Chinese government’s record for leveraging private companies for political purposes. “The political system dictates that. So [the data privacy issue] is one concern.”

“The other is the possibility of the Chinese government to push propaganda or suppress content that it doesn’t like and basically manipulate the content seen by Americans,” she continued.

Wang said there isn’t enough systemic information at present to prove the Chinese government has done this in regards to U.S. politics, but the threat is still there.

“Chinese companies are beholden to the Chinese government which absolutely has an agenda to undermine freedom around the world,” said Wang. She noted that while China doesn’t appear to have a specific agenda to suppress content or push propaganda in the U.S. today, tensions between the two countries continue to rise. If a future conflict comes to a head, China could “really leverage TikTok in a way they’re not doing now.”

Of course, American companies have been at the center of attempts by foreign entities to undermine democratic processes as well. One need look no further than the Cambridge Analytica scandal and Russia’s use of Facebook political ads to influence the 2016 presidential election for high-profile examples.

That’s why Wang says that more important than a ban on TikTok is a comprehensive data privacy law that protects user data at all companies from being exploited or breached.

“I mean if China wants Facebook data today, it can just purchase it on the market,” Wang points out.

TikTok’s chances in court are unclear

The government has a hard case to prove, and it’s not a sure decision one way or the other. If the precedent set by past court rulings is applied in TikTok’s future case, then the company has nothing to worry about. After all, as Mirell has speculated, the TikTok ban appears to have been added as a sweetener needed to pass a larger bill that would approve aid for Israel and Ukraine. However, the current administration might also have simply disagreed with how the courts have decided to limit TikTok in the past, and want to challenge that.

“When this case goes to court, the Government (i.e., the Department of Justice) will ultimately have to prove that TikTok poses an imminent threat to the nation’s national security and that there are no other viable alternatives for protecting that national security interest short of the divestment/ban called for in this legislation,” Mirell told TechCrunch in a follow-up email.

“For its part, TikTok will assert that its own (and perhaps its users’) First Amendment rights are at stake, will challenge all claims that the platform poses any national security risk, and will argue that the efforts already undertaken by both the Government (e.g., through its ban upon the use of TikTok on all federal government devices) and by TikTok itself (e.g., through its ‘Project Texas’ initiative) have effectively mitigated any meaningful national security threat,” he explained.

In December 2022, Biden signed a bill prohibiting TikTok from being used on federal government devices. Congress has also been considering a bill called the Restrict Act that gives the federal government more authority to address risks posed by foreign-owned technology platforms.

“If Congress didn’t think that [Project Texas] was sufficient, they could draft and consider legislation to enhance that protection,” said Mirell. “There are plenty of ways to deal with data security and potential influence issues well short of divestment, much less a ban.”





Security bugs in popular phone-tracking app iSharing exposed users' precise locations | TechCrunch


Last week, when a security researcher said he could easily obtain the precise location of any one of the millions of users of a widely used phone-tracking app, we had to see it for ourselves.

Eric Daigle, a computer science and economics student at the University of British Columbia in Vancouver, found the vulnerabilities in the tracking app iSharing as part of an investigation into the security of location-tracking apps. iSharing is one of the more popular location-tracking apps, claiming more than 35 million users to date.

Daigle said the bugs allowed anyone using the app to access anyone else’s coordinates, even if the user wasn’t actively sharing their location data with anybody else. The bugs also exposed the user’s name, profile photo and the email address and phone number used to log in to the app.

The bugs meant that iSharing’s servers were not properly checking that app users were only allowed to access their location data or someone else’s location data shared with them.
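That failure pattern is a classic broken-access-control (often called IDOR) bug: the server trusts a client-supplied user ID instead of deriving the caller’s identity from the authenticated session and checking what that caller may see. A hedged sketch of the missing server-side check, not iSharing’s actual code, looks like this:

import express from "express";

const app = express();

// The check that was evidently missing: derive the requester's identity from
// the session or token, never from the URL, and confirm the target user's
// location is actually shared with them before returning coordinates.
app.get("/locations/:userId", async (req, res) => {
  const requesterId = getAuthenticatedUserId(req);
  const targetId = req.params.userId;

  const allowed =
    requesterId === targetId ||
    (await isLocationSharedWith(targetId, requesterId)); // e.g. members of the same "group"

  if (!allowed) {
    res.status(403).json({ error: "not authorized to view this location" });
    return;
  }
  res.json(await loadLatestLocation(targetId));
});

// Hypothetical helpers; the auth and storage layers are assumptions.
declare function getAuthenticatedUserId(req: express.Request): string;
declare function isLocationSharedWith(owner: string, viewer: string): Promise<boolean>;
declare function loadLatestLocation(userId: string): Promise<{ lat: number; lng: number }>;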

Location-tracking apps — including stealthy “stalkerware” apps — have a history of security mishaps that risk leaking or exposing users’ precise location.

In this case, it took Daigle only a few seconds to locate this reporter down to a few feet. Using an Android phone with the iSharing app installed and a new user account, we asked the researcher if he could pull our precise location using the bugs.

“770 Broadway in Manhattan?” Daigle responded, along with the precise coordinates of TechCrunch’s office in New York from where the phone was pinging out its location.

The security researcher pulled our precise location data from iSharing’s servers, even though the app was not sharing our location with anybody else. Image Credits: TechCrunch (screenshot)

Daigle shared details of the vulnerability with iSharing some two weeks earlier but had not heard anything back. That’s when Daigle asked TechCrunch for help in contacting the app makers. iSharing fixed the bugs soon after, over the weekend of April 20-21.

“We are grateful to the researcher for discovering this issue so we could get ahead of it,” iSharing co-founder Yongjae Chuh told TechCrunch in an email. “Our team is currently planning on working with security professionals to add any necessary security measures to make sure every user’s data is protected.”

iSharing blamed the vulnerability on a feature it calls groups, which allows users to share their location with other users. Chuh told TechCrunch that the company’s logs showed there was no evidence that the bugs were found prior to Daigle’s discovery. Chuh conceded that there “may have been oversight on our end,” because its servers were failing to check if users were allowed to join a group of other users.

TechCrunch held the publication of this story until Daigle confirmed the fix.

“Finding the initial flaw in total was probably an hour or so from opening the app, figuring out the form of the requests, and seeing that creating a group on another user and joining it worked,” Daigle told TechCrunch.

From there, he spent a few more hours building a proof-of-concept script to demonstrate the security bug.

Daigle, who described the vulnerabilities in more detail on his blog, said he plans to continue research in the stalkerware and location-tracking area.



To contact this reporter, get in touch on Signal and WhatsApp at +1 646-755-8849, or by email. You can also send files and documents via SecureDrop.

