Bytes to Insights: Weekly News Digest for the Week of November 24, 2024

This week’s Bytes to Insights Weekly News Digest is also available as a podcast if you prefer to listen on the go or enjoy an audio format.
OpenAI announced its ambition to expand its user base to 1 billion within the next year. The growth strategy includes the introduction of new AI products, the construction of dedicated data centers, and a partnership with Apple. ChatGPT currently has 250 million weekly users. The company is also developing AI agents and an AI-powered search engine to support this expansion. Following a $6 billion investment, OpenAI is seeking additional funding to continue its transition toward a for-profit model despite significant operational costs.
AI-driven chatbots played a pivotal role in the U.S. Black Friday online sales, which reached a record $10.8 billion — a 10.2% increase from 2023. Retailers utilizing AI experienced a 9% rise in conversion rates, as these technologies assisted customers in efficiently locating deals and completing purchases. Mobile shopping also surged, with smartphones accounting for 55% of online sales.
The Commonwealth Bank of Australia (CBA) has invested substantially in AI to enhance its operations. The bank’s AI systems have cut call center wait times by 40% and scam losses by 50%. CBA is exploring further automation of processes, potentially impacting thousands of call center positions, as part of a broader strategy to improve productivity and profitability.
A Danish study by Digitalt Ansvar revealed that Instagram’s algorithm promotes the growth of self-harm networks and fails to effectively detect and remove self-harm content, including images of blood and razor blades. Despite Meta’s claims about the accuracy of its AI tools, posts related to self-harm were not removed during the experiment. Additionally, the algorithm recommended other profiles connected to self-harm networks to young users. These findings contradict Instagram’s stated policies and point to significant safety risks for minors.
Colonel Richard Kemp, former head of the UK’s COBRA intelligence committee, warned that Europe is unprepared for modern warfare’s technological advancements, including AI-driven drone swarms and hypersonic missiles. He emphasized the need to develop both offensive and defensive capabilities to counter these emerging threats.
With the impending Republican control of the U.S. government, the future of AI regulations remains uncertain. President-elect Donald Trump plans to rescind previous AI executive orders to reduce regulatory barriers and emphasize free speech. This shift could impact policies related to AI’s use in elections and misinformation, with concerns about balancing innovation with safety and ethical considerations.
Former Google CEO Eric Schmidt expressed concerns that AI chatbots, particularly those designed as “perfect girlfriends,” could exacerbate loneliness among young men. He warned that over-reliance on AI companions might hinder social development and emphasized the importance of parental monitoring of technology use.
An AI artist, Miguel Ángel Omaña Rojas, utilized artificial intelligence to create a realistic image of the teenage Virgin Mary before the birth of Jesus. By analyzing the original image of the Virgin of Guadalupe, Omaña reconstructed her facial features and expressions, contributing to the intersection of AI and religious art.
OpenAI is reportedly discussing a partnership with Samsung Electronics to integrate AI features into Samsung devices. This move could challenge Google’s dominance in the mobile AI market.
Amazon is developing its own AI chips to reduce reliance on Nvidia. The company is investing heavily in this initiative at its engineering lab in Austin, Texas.
A report by APACMed and Bain & Company highlights AI’s potential in Asia-Pacific’s medical technology industry. The focus is on tailoring AI solutions to local market demands and developing region-specific data models.
CBS News reports on the working conditions of “humans in the loop” who label data for AI training. These workers, often in developing countries, claim to be underpaid and exposed to harmful content while working for major tech companies.
Stanford professor Jeff Hancock, an expert on AI and misinformation, has been accused of using AI to fabricate an expert declaration in a Minnesota court case. The case involves a ban on political deepfakes, and Hancock’s testimony is alleged to cite a non-existent study.
Reuters reports that the rise of AI is reshaping investment strategies, with a shift toward hardware-intensive projects. Big tech companies are projected to spend over $200 billion on AI infrastructure by 2025.
These developments highlight AI’s expanding influence across various sectors, including technology, commerce, defense, regulation, social dynamics, and the arts.
Thank you for being a part of this fascinating journey.
BearNetAI is a proud member of the Association for the Advancement of Artificial Intelligence and a signatory to the Asilomar AI Principles, dedicated to the responsible and ethical development of AI.
BearNetAI. From Bytes to Insights. AI Simplified.
© 2024 BearNetAI LLC