From Digital Age to Nano Age. WorldWide.


Robotic Automations

Ofcom to push for better age verification, filters and 40 other checks in new online child safety code | TechCrunch


Ofcom is cracking down on Instagram, YouTube and 150,000 other web services to improve child safety online. A new Children’s Safety Code from the U.K. Internet regulator will push tech firms to run better age checks, filter and downrank content, and apply around 40 other steps to assess harmful content around subjects like suicide, self-harm and pornography, to reduce under-18s’ access to it. The Code is currently in draft form and open for feedback until July 17; enforcement is expected to kick in next year, after Ofcom publishes the final version in the spring. Firms will have three months to complete their inaugural child safety risk assessments after the final Children’s Safety Code is published.

The Code is significant because it could force a step-change in how Internet companies approach online safety. The government has repeatedly said it wants the U.K. to be the safest place to go online in the world. Whether it will be any more successful at preventing digital slurry from pouring into kids’ eyeballs than it has actual shit from polluting the country’s waterways remains to be seen. Critics of the approach suggest the law will burden tech firms with crippling compliance costs and make it harder for citizens to access certain types of information.

Meanwhile, failure to comply with the Online Safety Act can have serious consequences for UK-based web services large and small, with fines of up to 10% of global annual turnover for violations, and even criminal liability for senior managers in certain scenarios.

The guidance puts a big focus on stronger age verification. Following on from last year’s draft guidance on age assurance for porn sites, age verification and estimation technologies deemed “accurate, robust, reliable and fair” will be applied to a wider range of services as part of the plan. Photo-ID matching, facial age estimation and reusable digital identity services are in; self-declaration of age and contractual restrictions on the use of services by children are out.

That suggests Brits may need to get accustomed to proving their age before they access a range of online content — though how exactly platforms and services will respond to their legal duty to protect children will be for private companies to decide: that’s the nature of the guidance here.

The draft proposal also sets out specific rules on how content is handled. Suicide, self-harm and pornography content — deemed the most harmful — will have to be actively filtered (i.e. removed) so minors do not see it. Ofcom wants other types of content, such as violence, to be downranked and made far less visible in children’s feeds. Ofcom also said it may expect services to act on other potentially harmful content (e.g. content about depression). The regulator told TechCrunch it will encourage firms to pay particular attention to the “volume and intensity” of what kids are exposed to as they design safety interventions. All of this demands that services be able to identify child users — again pushing robust age checks to the fore.

Ofcom previously named child safety as its first priority in enforcing the UK’s Online Safety Act — a sweeping content moderation and governance rulebook that touches on harms as diverse as online fraud and scam ads; cyberflashing and deepfake revenge porn; animal cruelty; and cyberbullying and trolling, as well as regulating how services tackle illegal content like terrorism and child sexual abuse material (CSAM).

The Online Safety Bill passed last fall, and now the regulator is busy with the process of implementation, which includes designing and consulting on detailed guidance ahead of its enforcement powers kicking in once parliament approves Codes of Practice it’s cooking up.

With Ofcom estimating around 150,000 web services in scope of the Online Safety Act, scores of tech firms will, at the least, have to assess whether children are accessing their services and, if so, take steps to identify and mitigate a range of safety risks. The regulator said it’s already working with some larger social media platforms where safety risks are likely to be greatest, such as Facebook and Instagram, to help them design their compliance plans.

Consultation on the Children’s Safety Code

In all, Ofcom’s draft Children’s Safety Code contains more than 40 “practical steps” the regulator wants web services to take to ensure child protection is enshrined in their operations. A wide range of apps and services are likely to fall in-scope — including popular social media sites, games and search engines.

“Services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges,” Ofcom wrote in a summary of the consultation.

“In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it,” it added in a press release Monday. “In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

Ofcom’s current proposal suggests that almost all services will have to take mitigation measures to protect children. Only those deploying age verification or age estimation technology that is “highly effective” and used to prevent children from accessing the service (or the parts of it where content poses risks to kids) will not be subject to the children’s safety duties.

Services that find, by contrast, that children can access them will need to carry out a follow-on assessment known as the “child user condition”. This requires them to assess whether “a significant number” of kids are using the service and/or are likely to be attracted to it. Those likely to be accessed by children must then take steps to protect minors from harm, including conducting a Children’s Risk Assessment and implementing safety measures (such as age assurance, governance measures and safer design choices), as well as reviewing their approach on an ongoing basis to keep up with changing risks and patterns of use.
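The exemption for services with strong age checks, plus the follow-on “child user condition”, amounts to a simple decision flow. Here is a minimal sketch of that logic; the function and parameter names are hypothetical, and Ofcom’s draft Code prescribes no such implementation:

```python
# Hypothetical sketch of the duty-assessment flow described above; names
# and structure are illustrative, not taken from Ofcom's draft Code.

def child_safety_duties_apply(highly_effective_age_assurance: bool,
                              children_can_access: bool,
                              meets_child_user_condition: bool) -> bool:
    """Return True if a service falls under the children's safety duties."""
    # Services using "highly effective" age assurance to keep children off
    # the service (or its risky parts) are exempt from the duties.
    if highly_effective_age_assurance:
        return False
    # Otherwise, a service children can access applies the "child user
    # condition": significant child use, or likely child appeal, triggers
    # the duties (risk assessment, safety measures, ongoing review).
    return children_can_access and meets_child_user_condition
```

Note the asymmetry Ofcom builds in: robust age assurance is the only exit from the duties, while an ambiguous child-user count defaults, per the regulator’s guidance, toward caution.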

Ofcom does not define what “a significant number” means in this context, but it cautions that “even a relatively small number of children could be significant in terms of the risk of harm. We suggest service providers should err on the side of caution in making their assessment.” In other words, tech firms may not be able to eschew child safety measures by arguing there aren’t many minors using their stuff.

Nor is there a simple one-shot fix for services that fall in scope of the child safety duty. Multiple measures are likely to be needed, combined with ongoing assessment of efficacy.

“There is no single fix-all measure that services can take to protect children online. Safety measures need to work together to help create an overall safer experience for children,” Ofcom wrote in an overview of the consultation, adding: “We have proposed a set of safety measures within our draft Children’s Safety Codes, that will work together to achieve safer experiences for children online.” 

Recommender systems, reconfigured

Under the draft Code, any service that operates a recommender system (a form of algorithmic content sorting driven by tracking user activity) and is at “higher risk” of showing harmful content must use “highly effective” age assurance to identify its child users. It must then configure its recommender algorithms to filter out the most harmful content (i.e. suicide, self-harm and pornography) from the feeds of users identified as children, and reduce the “visibility and prominence” of other harmful content.

Under the Online Safety Act, suicide, self harm, eating disorders and pornography are classed “primary priority content”. Harmful challenges and substances; abuse and harassment targeted at people with protected characteristics; real or realistic violence against people or animals; and instructions for acts of serious violence are all classified “priority content.” Web services may also identify other content risks they feel they need to act on as part of their risk assessments.
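The two content tiers imply the filter-and-downrank behaviour the draft Code expects of recommender feeds for identified child users. A minimal illustrative sketch follows; the category labels, scores and 0.2 downranking factor are hypothetical, as Ofcom describes outcomes rather than an algorithm:

```python
# Illustrative sketch only: category sets, scores and the 0.2 penalty are
# hypothetical; Ofcom's draft Code does not specify an implementation.

PRIMARY_PRIORITY = {"suicide", "self-harm", "eating-disorder", "pornography"}
PRIORITY = {"violence", "harmful-challenge", "abuse", "harassment"}

def rank_for_child(items):
    """Re-rank a feed for an identified child user.

    items: list of (content_id, label, relevance_score) tuples.
    Primary priority content is filtered out entirely; priority content
    has its "visibility and prominence" reduced via a score penalty.
    """
    ranked = []
    for content_id, label, score in items:
        if label in PRIMARY_PRIORITY:
            continue  # filter: child users must not see this at all
        if label in PRIORITY:
            score *= 0.2  # downrank: make far less visible in the feed
        ranked.append((content_id, score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

For a feed of `[("a", "pornography", 0.9), ("b", "violence", 0.8), ("c", "sports", 0.5)]`, this sketch drops “a” entirely and ranks “c” above the downranked “b”, despite “b” starting with the higher relevance score.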

In the proposed guidance, Ofcom also wants children to be able to provide negative feedback directly to the recommender feed, so that it can better learn what content they don’t want to see.

Content moderation is another big focus in the draft Code. The regulator highlighted research showing that content harmful to children is available at scale on many services, which it said suggests services’ current moderation efforts are insufficient.

Its proposal recommends all “user-to-user” services (i.e. those allowing users to connect with each other, such as via chat functions or through exposure to content uploads) must have content moderation systems and processes that ensure “swift action” is taken against content harmful to children. Ofcom’s proposal does not contain any expectations that automated tools are used to detect and review content. But the regulator writes that it’s aware large platforms often use AI for content moderation at scale and says it’s “exploring” how to incorporate measures on automated tools into its Codes in the future.

“Search engines are expected to take similar action,” Ofcom also suggested. “And where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

“Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained,” it added.

The draft Code also includes measures it hopes will ensure “strong governance and accountability” around children’s safety inside tech firms. “These include having a named person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee Code of Conduct that sets standards for employees around protecting children,” Ofcom wrote.

Facebook- and Instagram-owner Meta was frequently singled out by ministers during the drafting of the law for having a lax attitude to child protection. The largest platforms may be likely to pose the greatest safety risks — and therefore have “the most extensive expectations” when it comes to compliance — but there’s no free pass based on size.

“Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Other proposed safety measures Ofcom highlights include suggesting services provide more choice and support for children and the adults who care for them — such as by having “clear and accessible” terms of service; and making sure children can easily report content or make complaints.

The draft guidance also suggests children are provided with support tools that enable them to have more control over their interactions online — such as an option to decline group invites; block and mute user accounts; or disable comments on their own posts.

The UK’s data protection authority, the Information Commissioner’s Office, has expected compliance with its own age-appropriate design code for children since September 2021, so there may be some overlap. Ofcom notes, for instance, that service providers may already have assessed children’s access for data protection compliance purposes, adding that they “may be able to draw on the same evidence and analysis for both.”

Flipping the child safety script?

The regulator is urging tech firms to be proactive about safety issues, saying it won’t hesitate to use its full range of enforcement powers once they’re in place. The underlying message to tech firms is get your house in order sooner rather than later or risk costly consequences.

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizeable fines,” it warned in a press release.

The government is rowing hard behind Ofcom’s call for a proactive response, too. Commenting in a statement today, the technology secretary Michelle Donelan said: “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines — step up to meet your responsibilities and act now.”

“The government assigned Ofcom to deliver the Act and today the regulator has been clear; platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” she added. “Once in place these measures will bring in a fundamental change in how children in the UK experience the online world.

“I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

Ofcom said it wants its enforcement of the Online Safety Act to deliver what it couches as a “reset” for children’s safety online — saying it believes the approach it’s designing, with input from multiple stakeholders (including thousands of children and young people), will make a “significant difference” to kids’ online experiences.

Fleshing out its expectations, it said it wants the rulebook to flip the script on online safety so children will “not normally” be able to access porn and will be protected from “seeing, and being recommended, potentially harmful content”.

Beyond identity verification and content management, it also wants the law to ensure kids won’t be added to group chats without their consent; and wants it to make it easier for children to complain when they see harmful content, and be “more confident” that their complaints will be acted on.

As it stands, the opposite looks closer to what UK kids currently experience online, with Ofcom citing research over a four-week period in which a majority (62%) of children aged 13-17 reported encountering online harm and many saying they consider it an “unavoidable” part of their lives online.

Exposure to violent content begins in primary school, Ofcom found, with children who encounter content promoting suicide or self-harm characterizing it as “prolific” on social media; and frequent exposure contributing to a “collective normalisation and desensitisation”, as it put it. So there’s a huge job ahead for the regulator to reshape the online landscape kids encounter.

As well as the Children’s Safety Code, its guidance for services includes a draft Children’s Register of Risk, which it said sets out more information on how risks of harm to children manifest online; and draft Harms Guidance, which sets out examples of the kinds of content it considers harmful to children. Final versions of all its guidance will follow the consultation process, which is itself a legal duty on Ofcom. It also told TechCrunch that it will be providing more information and launching some digital tools to further support services’ compliance ahead of enforcement kicking in.

“Children’s voices have been at the heart of our approach in designing the Codes,” Ofcom added. “Over the last 12 months, we’ve heard from over 15,000 youngsters about their lives online and spoken with over 7,000 parents, as well as professionals who work with children.

“As part of our consultation process, we are holding a series of focused discussions with children from across the UK, to explore their views on our proposals in a safe environment. We also want to hear from other groups including parents and carers, the tech industry and civil society organisations — such as charities and expert professionals involved in protecting and promoting children’s interests.”

The regulator recently announced plans to launch an additional consultation later this year, which it said will look at how automated tools, including AI technologies, could be deployed in content moderation processes to proactively detect illegal content and content most harmful to children — such as previously undetected CSAM and content encouraging suicide and self-harm.

However, there is no clear evidence today that AI will be able to improve detection efficacy of such content without causing large volumes of (harmful) false positives. It thus remains to be seen whether Ofcom will push for greater use of such tech tools given the risks that leaning on automation in this context could backfire.

In recent years, a multi-year push by the Home Office geared towards fostering the development of so-called “safety tech” AI tools — specifically to scan end-to-end encrypted messages for CSAM — culminated in a damning independent assessment which warned such technologies aren’t fit for purpose and pose an existential threat to people’s privacy and the confidentiality of communications.

One question parents might have is what happens on a kid’s 18th birthday, when the Code no longer applies? If all these protections wrapping kids’ online experiences end overnight, there could be a risk of (still) young people being overwhelmed by sudden exposure to harmful content they’ve been shielded from until then. That sort of shocking content transition could itself create a new online coming-of-age risk for teens.

Ofcom told us future proposals for larger platforms could be introduced to mitigate this sort of risk.

“Children are accepting this harmful content as a normal part of the online experience — by protecting them from this content while they are children, we are also changing their expectations for what’s an appropriate experience online,” an Ofcom spokeswoman responded when we asked about this. “No user, regardless of their age, should accept to have their feed flooded with harmful content. Our phase 3 consultation will include further proposals on how the largest and riskiest services can empower all users to take more control of the content they see online. We plan to launch that consultation early next year.”


Software Development in Sri Lanka


Inside Mercury's competitive push into software and Ramp's potential M&A targets | TechCrunch


Welcome to TechCrunch Fintech! This week, we’re looking at Mercury’s latest expansions, wallet-as-a-service startup Ansa’s raise and more!

To get a roundup of TechCrunch’s biggest and most important fintech stories delivered to your inbox every Tuesday at 8 a.m. PT, subscribe here. (New day and time, same awesome newsletter!)

The big story

Digital banking startup Mercury is layering software onto its bank accounts, giving its business customers the ability to pay bills, invoice customers and reimburse employees, the company has told TechCrunch exclusively. The additional features put the company in even more direct competition with the likes of Brex and Ramp, two rival fintechs that have for years been fighting for market share in an increasingly crowded space. Mercury says that it has over 200,000 customers sending $4 billion in outgoing payments every month via its platform and that this move is a natural one for the seven-year-old company.

Analysis of the week

CB Insights took it upon itself to identify 85 potential acquisition targets for Ramp “given its heightened interest in M&A.” Here are a few examples: Greycroft-backed Streamlined, which does accounts receivable (AR) automation and whose $4 million raise TechCrunch covered here; Oddr, which is focused on invoice-to-cash management for the legal sector; Pactum, which does AI vendor negotiation; and OpStart, a startup valued at $10 million in 2022 that offers “financial operations for startups.” So far Ramp has acquired Cohere, Buyer and Venue.

Dollars and cents

We first covered Ansa in 2023 when they came out of stealth announcing a $5.4 million raise. Last week, the buzzy fintech shared with TC exclusively that it had raised another $14 million to grow its “wallet-as-a-service” business. We were impressed with the fact that 95.6% of the investors in its Series A round were female and by the company’s traction. Read more here.

Flipping houses is not for the faint of heart, no matter how fun or easy HGTV might make it seem. One startup wants to make the process less complicated by offering a different way to borrow money to fund such a purchase. Backflip offers a service to real estate investors for securing short-term loans. Beyond helping users secure financing, Backflip’s tech also helps investors source, track, comp and evaluate potential investments. Think of it as a cross between Zillow and Shopify. And it just raised $15 million.

What else we’re writing

Hans Tung, a managing partner at Notable Capital, formerly GGV Capital, has a lot of thoughts on the state of venture capital today. We recently brought him on TechCrunch’s Equity podcast to discuss valuations, why founders need to play the long game and the reason some VC firms are struggling more than others. We also delved deep into the reasons he’s still bullish on fintech, and which sectors in the fintech space have him especially excited. Check out interview excerpts and the actual podcast here.

High-interest headlines

The inside story of Chime, America’s biggest digital bank

Karma Wallet acquires sustainability marketplace DoneGood ahead of card and membership programme launch

Marqeta expands Uber Eats partnership

Nayax acquires VMtecnologia, expands in Latin America

Federal prosecutors are examining financial transactions at Block, owner of Cash App and Square

RIA custodian Altruist valued at over $1.5 bln in latest funding round

Want to reach out with a tip? Email me at maryann@techcrunch.com or send me a message on Signal at 408.204.3036. You can also send a note to the whole TechCrunch crew at tips@techcrunch.com. For more secure communications, click here to contact us, which includes SecureDrop (instructions here) and links to encrypted messaging apps.



Indian ride-hailing giant Ola cuts 180 jobs in profitability push | TechCrunch


Ola has let go of its chief executive officer, Hemant Bakshi, just four months after making the appointment, and is cutting about 180 jobs, a source familiar with the matter told TechCrunch. The move from the Indian ride-hailing startup is aimed at “improving profitability,” its founder Bhavish Aggarwal told employees in an email Monday seen by TechCrunch.

The Bengaluru-headquartered startup, which counts SoftBank and Tiger Global among its backers, is undergoing a “restructuring exercise” to gear up for its “next phase of growth,” Aggarwal, pictured above, wrote in the email.

The move follows Ola shutting down its operations in the U.K., Australia and New Zealand earlier this month. Bakshi, a former HUL executive, was appointed as Ola chief executive in January this year.

Ola is looking to go public later this year, months after the public debut of Ola Electric, a startup that spun out of the ride-hailing firm. Both the startups were founded by Aggarwal, who has since also founded the AI startup Krutrim, which became a unicorn in January. Ola Electric is seeking to raise more than $650 million in its initial public offering, according to paperwork filed by the firm.

You can read Aggarwal’s Monday email to staff in its entirety below.

Dear All,

In line with our vision to serve 1 Billion Indians, and our commitment to drive sustainable growth and enhance efficiency across the organization, we are undergoing a restructuring exercise aimed at improving profitability and preparing ourselves for the next phase of growth.

We have made substantial investments in areas of AI & Technology which has led to significant cost advantages and we will continue to focus on these areas to ensure that we build cutting edge products and services across our business verticals.

These changes will result in certain roles within the company becoming redundant. This decision was not made lightly, and we are committed to supporting those impacted during this transition period.

Hemant will be stepping down from his role as CEO to pursue opportunities outside the company. We extend our gratitude to Hemant for his contributions and wish him the best in his future endeavors.

I am very confident of the strong leadership team which we have built over the last few years at Ola Consumer, who bring in a lot of experience and expertise to their respective roles. They will collaborate closely with me to drive technology-led growth.

We are committed to transparency and open communication throughout this process. Our HR team would be available to address any queries or concerns you may have.

Thank you for your unwavering dedication and commitment to Ola.

Best,
Bhavish



Tesla earnings week spotlights EV price cuts, 'balls to the wall' autonomy push | TechCrunch


Tesla investors, still digesting a 43% drop in share price since the beginning of the year, are gearing up for what will likely be unimpressive financial results for the first quarter and a shift in priorities for CEO Elon Musk, who is making more moves to go “balls to the wall for autonomy.”

Tesla is expected to report earnings after markets close Tuesday. The company’s earnings call is scheduled for 5:30 pm ET.

Tesla shares rose more than 2% Tuesday morning ahead of earnings, a brief rosy sign amid an otherwise downward trend that’s accelerated since early March. The falling share price comes as Musk pushes forward with a renewed focus on automated driving on two fronts: selling more customers on its advanced driver assistance system, known as “Full Self-Driving” or FSD, and a moonshot effort to bring a robotaxi to market.

Over the weekend, Tesla dropped the price of its Full Self-Driving (FSD) advanced driver-assistance system to $8,000, down from $12,000. That price cut is in addition to last week’s drop of the FSD monthly subscription to $99, down from $199. The push to get FSD into more cars could be a bid to collect more data as Tesla works to boost the neural networks that will power fuller-scale autonomy. FSD today can perform many driving tasks in cities and on highways, but still requires a human to remain alert with their hands on the wheel in case the system requires a takeover.

Tesla faces narrowing profits as it places a major and expensive bet on autonomous driving technology. Last week, Tesla laid off 10% of its staff in a move to reduce costs in preparation for the company’s “next growth phase,” per an email Musk sent to all employees.

Earlier this month, Musk abruptly announced on X that Tesla was pausing the development of its $25,000 electric vehicle in favor of a robotaxi that he promised to reveal in August. Sources within Tesla have confirmed to TechCrunch that they didn’t have prior warning from Musk on this sudden shift and that internal restructurings reflect a new ethos that puts robotaxi development at front and center.

All of this is happening as Tesla zigzags on its EV pricing strategy.

Last week, Tesla ditched EV inventory price discounts, but over the weekend slashed prices on the Model 3 and Model Y by as much as $2,000 in the U.S., China and Germany. As we saw during the first quarter of 2023, those price cuts are taking their toll on Tesla’s income and margins.

The company will need to convince investors that its shift in priority to autonomous vehicles is a silver lining in the cloud of declining margins, rather than just smoke and mirrors.

What to expect at Tesla’s Q1 2024 earnings

Tesla’s lower first-quarter delivery figures combined with price cuts are ingredients for a smaller profit pie. And analysts seem to agree.

Analysts polled by Yahoo Finance expect a profit of $0.48 per share on $20.94 billion in revenue. As a reminder, Tesla generated $25.17 billion revenue in Q4 and $23.3 billion in the first quarter of 2023.

Tesla delivered 386,810 vehicles in the first quarter of 2024, down 20% from the 484,507 it delivered in the final quarter of 2023. It’s worth noting that this wasn’t just a quarter-over-quarter blip: Tesla delivered fewer cars than in the first quarter of 2023 — the first year-over-year drop in sales in three years.

Tesla’s Q4 results show a company already grappling with shrinking profit margins due to its price-cutting strategy, rising costs of its Cybertruck production launch and other R&D expenses.

The automaker reported net income, on a GAAP basis, of $7.9 billion in the fourth quarter — an outsized number caused by a one-time, non-cash tax benefit of $5.9 billion. The company’s operating income and its earnings on an adjusted basis provided a clearer picture of its financial performance.

Tesla reported operating income of $2.06 billion in the fourth quarter, a 47% decrease from the same year-ago period. On an adjusted basis, the company earned $3.9 billion, a 27% drop from the same period last year.

The question is whether Tesla can prevent that profit pie from shrinking into a profit muffin.

Since Tesla reported its Q1 2024 production and delivery numbers, the company has continued to pull various financial levers aimed at attracting new buyers and inducing existing customers to pay for FSD — all while reducing costs and maintaining profit margins.

Those opposing goals coupled with Musk’s “wartime CEO mode” status are bound to make the Q1 earnings call entertaining. Beyond that potential theater, there are pressing long-term questions about how Tesla delivers on autonomy and if it will be enough to convince investors that it can still lead and innovate.




Software Development in Sri Lanka

Robotic Automations

Tesla earnings week spotlights price cuts, Elon's 'balls to the wall' autonomy push | TechCrunch


As Tesla gears up to report what will likely be unimpressive financial results for the first quarter on Tuesday, the company is making more moves to go “balls to the wall for autonomy,” as CEO Elon Musk put it last week in a post on X

Over the weekend, Tesla dropped the price of its Full Self-Driving (FSD) advanced driver assistance system to $8,000, down from $12,000. That price cut is in addition to last week’s drop of the FSD monthly subscription to $99, from $199. The push to get FSD into more cars could be a bid to collect more data as Tesla works to boost the neural networks that will power fuller-scale autonomy. FSD today can perform many driving tasks in cities and on highways, but still requires a human to remain alert with their hands on the wheel in case the system requires a takeover. 

Tesla faces narrowing profits as it places a major and expensive bet on autonomous driving technology. Last week, Tesla laid off 10% of its staff in a move to reduce costs in preparation for the company’s “next growth phase,” per an email Musk sent to all employees. 

Earlier this month, Musk abruptly announced on X that Tesla was pausing the development of its $25,000 electric vehicle in favor of a robotaxi that he promised to reveal in August. Sources within Tesla have confirmed to TechCrunch that they didn’t have prior warning from Musk on this sudden shift, and that internal restructurings reflect a new ethos that puts robotaxi development at front and center. 

All of this is happening as Tesla zigzags on its EV pricing strategy. 

Last week, Tesla ditched EV inventory price discounts, but over the weekend slashed prices on Model 3 and Model Ys by as much as $2,000 in the U.S., China and Germany. As we saw during the first quarter of 2023, those price cuts are taking their toll on Tesla’s income and margins

Tesla is scheduled to report earnings after markets close April 23. Musk has previously said that without autonomy, Tesla is “basically worth zero.” 

The company will need to convince investors tomorrow that its shift in priority to autonomous vehicles is a silver lining in the cloud of declining margins, rather than just smoke and mirrors. 

Since Musk laid off staff and announced that Tesla would be going hard on autonomy, Tesla’s share price has dropped almost 10%. Shares have fallen over 42% since the start of the year.

What to expect at Tesla’s Q1 2024 earnings

Tesla’s lower first-quarter delivery figures combined with price cuts are ingredients for a smaller profit pie. And analysts seem to agree. 

Analysts polled by Yahoo Finance expect a profit of $0.48 per share on $20.94 billion in revenue. As a reminder, Tesla generated $25.17 billion in revenue in Q4 and $23.3 billion in the first quarter of 2023. 

Tesla delivered 386,810 vehicles in the first quarter of 2024, down 20% from the 484,507 it delivered in the final quarter of 2023. It’s worth noting that this wasn’t just a quarter-over-quarter blip. Tesla delivered fewer cars than in the first quarter of 2023 — the first year-over-year drop in sales in three years.
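The quarter-over-quarter decline works out as a simple percentage change; a quick sketch using the delivery figures above:

```python
# Verify the quarter-over-quarter delivery decline cited above.
q4_2023 = 484_507   # Tesla deliveries, Q4 2023
q1_2024 = 386_810   # Tesla deliveries, Q1 2024

drop_pct = (q4_2023 - q1_2024) / q4_2023 * 100
print(f"{drop_pct:.1f}%")  # → 20.2%, consistent with the roughly 20% drop cited
```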

Tesla’s Q4 results showed a company already grappling with shrinking profit margins due to its price-cutting strategy, the rising costs of its Cybertruck production launch and other R&D expenses. 

The automaker reported net income, on a GAAP basis, of $7.9 billion in the fourth quarter — an outsized number caused by a one-time non-cash tax benefit of $5.9 billion. The company’s operating income and its earnings on an adjusted basis provided a clearer picture of its financial performance.

Tesla reported operating income of $2.06 billion in the fourth quarter, a 47% decrease from the same year-ago period. On an adjusted basis, the company earned $3.9 billion, a 27% drop from the same period last year.

The question is whether Tesla can prevent that profit pie from shrinking to a profit muffin. 

Since Tesla reported its Q1 2024 production and delivery numbers, the company has continued to pull various financial levers aimed at attracting new buyers and inducing existing customers to pay for FSD — all while reducing costs and maintaining profit margins. 

Those opposing goals coupled with Musk’s “wartime CEO mode” status are bound to make the Q1 earnings call entertaining. Beyond that potential theater, there are pressing long-term questions about how Tesla delivers on autonomy and if it will be enough to convince investors that it can still lead and innovate. 





Dark Space is building a rocket-powered boxing glove to push debris out of orbit | TechCrunch


Paris-based Dark Space is taking on the dual problems of debris and conflict in orbit with its mobile platform designed to launch, attach to, and ultimately deorbit uncooperative objects in space.

Dark CEO Clyde Laheyne said the company is aiming to become the “S.W.A.T. team of space.”

The three-year-old startup is developing Interceptor, a spacecraft that is essentially a rocket-powered boxing glove that can be launched at short notice to gently punch a wayward object out of its orbit.

Interceptor is itself launched from a specially outfitted aircraft. Much like a Virgin Galactic launch, the aircraft will take the rocket above the tumultuous lower atmosphere, where it can be released and ignited. Once the rocket reaches the vicinity of the target object, the spacecraft detaches and uses onboard sensors and propulsion to find and approach it. When it’s lined up correctly, Interceptor pushes against the object with its cushioned “effector,” eventually deorbiting it.

“All the space sector is organized to do planned, long missions … but orbital defense is more about unplanned, short missions,” Laheyne said. In that sense, Interceptor “is more like an air defense missile,” he explained. “It has to be ready all the time. There is no excuse that is viable to not use it.”

Unlike an actual missile or anti-satellite weapon, however, the gentle strike of the Interceptor doesn’t produce a debris field or any other dangerous, unpredictable effects.

Dark Space was founded by Laheyne and CTO Guillaume Orvain, engineers who cut their teeth at multinational missile developer MBDA. That work experience shines through in the Interceptor concept, which is being designed to operate on-call, similar to missile systems. That’s also why Dark is developing its own launching platform: to ensure readiness for defense, civil and commercial customers at a moment’s notice, Laheyne said.

Dark Space co-founders Clyde Laheyne and Guillaume Orvain. Image credit: Dark

Dark closed a $5 million funding round in 2021, with the cap table composed of European investors including lead investor Eurazeo. The team closed a $6 million extension yesterday, including participation from its first U.S.-based investor, Long Journey Ventures. (That fund is led by Arielle Zuckerberg, the younger sister of Meta founder Mark Zuckerberg.)

The company has a lot of work ahead before it comes close to removing something like a defunct rocket second stage from orbit. Dark has been focused on developing critical systems, like the cryogenic engine and software. Now the team is shifting its focus to the technologies needed for the type of unplanned, quick missions Interceptor will execute: long-distance detection and tracking, autonomous flight algorithms, and a system for reliable controlled reentry.

The team must also retrofit an aircraft — which Laheyne estimated could cost $50 million, or around the price of building a new launch pad — and have the entire platform ready for a demonstration mission in 2026.

That mission would validate many of the core technologies of the full-scale platform, though it won’t actually aim to deorbit an object, just touch one. Even this is incredibly ambitious: no company has yet cracked so-called rendezvous and proximity operations, meaning moving close to another object in space and interacting with it.

The second demonstration mission, which is currently planned for 2027, will include a deorbit attempt. If all goes to plan, the company would start deorbiting objects for allied civil agencies. As far as defense customers go, “hopefully we don’t have to use it,” Laheyne said.

“I’ve been doing missiles for years, and it’s always the same topic: if you use it first, it’s an act of war. If you’re second, it’s an act of defense. If you can do it, and people know you can do it, it’s dissuasion,” he said. “The ideal is dissuasion, the system that makes the conflict unthinkable.”

