Wearable Robotics Transforming Human Mobility in Walking and Running

Wearable robotics, including Nike’s Project Amplify and the Hypershell X exoskeleton, are transforming how we walk and run, aiming to enhance movement rather than replace it.

In recent years, the field of robotics has expanded beyond the confines of factories and laboratories, making its way into our daily lives. Wearable robotics, which include powered footwear and lightweight exoskeletons, are emerging as a new consumer category designed to assist movement rather than replace physical effort.

Historically, innovations in sports technology have focused on enhancing speed and performance, often benefiting elite athletes. However, the focus is shifting towards accessibility and support for everyday users. Nike’s Project Amplify exemplifies this trend. Developed in collaboration with robotics partner Dephy, this system integrates a carbon plate within the shoe and a motorized cuff worn above the ankle. The cuff uses sensors to monitor stride patterns in real time, providing subtle assistance that feels natural and smooth, rather than forcing movement.

Previous attempts at creating powered footwear faced challenges due to the weight of batteries and motors, which made the devices feel cumbersome and unbalanced. Modern designs have addressed these issues by relocating energy storage to the ankle or hips, thereby reducing strain on the feet and improving overall balance. Enhanced battery technology and advanced motion sensors allow these systems to adapt to users’ strides dynamically, making the experience feel like an extension of the body. Nike aims for a commercial release of Project Amplify around 2028.

However, Nike is not the only player in this evolving market. The Hypershell X is another notable example, designed as a lightweight outdoor exoskeleton for hikers and long-distance walkers. This system wraps around the waist and legs, employing small motors to alleviate fatigue during climbs and on uneven terrain. The goal is straightforward: to help users go farther without feeling drained. Hypershell has also introduced the X Ultra, a more robust version tailored for steeper terrains and longer excursions, providing stronger assistance while remaining compact enough to wear under standard outdoor gear.

Dnsys has also entered the market with the X1 all-terrain exoskeleton, aimed at hikers and outdoor enthusiasts. Unlike earlier lab prototypes, the X1 has been successfully sold through crowdfunding and direct online orders, marking it as one of the early consumer-ready entries in the wearable robotics space.

Another innovative product is WIM from WIRobotics, a wearable robot that weighs approximately 3.5 pounds and supports natural hip movement while walking. This device is targeted at older adults, active individuals, and those recovering from minor injuries, providing assistance without the bulkiness of traditional medical devices.

The medical applications of wearable robotics have been developing for a longer time. Companies like Ekso Bionics and ReWalk have created powered exoskeletons that assist individuals with spinal cord injuries or strokes in standing and walking. These systems are primarily used in rehabilitation clinics and select personal mobility programs, demonstrating how wearable robotics have evolved from medical settings to consumer-oriented designs.

What unites these diverse products is a common goal: to actively assist movement rather than merely track it. Many individuals face barriers to physical activity that are not solely related to injury; hesitation often plays a significant role. Concerns about knee pain, fatigue, or the fear of slowing down others can deter people from engaging in physical activity. Wearable robotics aim to bridge this confidence gap by reducing fatigue and supporting joints, making movement feel more attainable for those who might otherwise avoid it.

Comparatively, the rise of e-bikes serves as a relevant analogy. Electric assistance has not eliminated cycling; instead, it has broadened the demographic of people who feel comfortable riding a bike. Similarly, powered footwear and wearable robotics could democratize walking and running, making these activities more accessible to a wider audience.

For some, this technology might mean replacing short car trips with walking, while for older adults, it could facilitate prolonged activity without excessive fatigue. Casual runners may find they can complete their workouts with energy to spare, rather than struggling through the final stretch. This shift is not about creating super athletes; it is about empowering more individuals to participate in physical activities.

Even if you are not inclined to use a powered exoskeleton or are not eagerly awaiting the arrival of motorized shoes in 2028, the implications of this technology are significant. For those who experience discomfort during long walks or skip runs due to fatigue concerns, wearable robotics are designed with these challenges in mind. The aim is not to transform anyone into a super athlete but to make movement feel more achievable.

For some, this could translate to walking an extra mile effortlessly, while for others, it might mean keeping pace with friends or feeling more confident about starting a new fitness routine. Wearable robotics are reshaping the conversation around fitness, shifting the focus from speed and performance to comfort and accessibility.

As wearable robotics continue to evolve, the question is not whether they will improve, but how society will choose to integrate them into daily life. If these technologies can help you walk and run with less strain, would you consider using them, or would you prefer to rely solely on your own efforts? This is a conversation worth having as we navigate the future of movement.

According to Fox News, the potential of wearable robotics to enhance everyday mobility is becoming increasingly clear.

Bill Gates to Meet Andhra Pradesh Chief Minister for Strategic Talks

Bill Gates is set to visit Amaravati, Andhra Pradesh, for strategic discussions with Chief Minister N. Chandrababu Naidu, focusing on health and artificial intelligence.

In a significant development highlighting the intersection of technology and governance, Bill Gates, co-founder of Microsoft and a prominent figure in the tech industry, is scheduled to visit Amaravati, the capital of Andhra Pradesh. His meeting with Chief Minister N. Chandrababu Naidu aims to explore opportunities for expanding cooperation in two critical areas: health and artificial intelligence (AI).

This visit underscores Gates’s ongoing commitment to global health and technological advancement while showcasing Andhra Pradesh’s ambition to emerge as a leader in these fields. As India rapidly advances its digital infrastructure and technological capabilities, the country has become a focal point for tech giants, thanks to its vast and diverse market.

Under Naidu’s leadership, Andhra Pradesh has been proactive in leveraging technology to enhance governance and public welfare. Naidu, often recognized as a tech-savvy leader, has played a crucial role in driving digital initiatives across the state, which include e-governance and smart city projects.

The discussions between Gates and Naidu are expected to focus on how AI can be utilized to improve healthcare delivery in the state. India faces numerous healthcare challenges, including a shortage of medical professionals and inadequate infrastructure, particularly in rural areas. AI holds the potential to address some of these issues by facilitating remote diagnostics, predictive analytics for disease outbreaks, and personalized medicine.

Gates’s insights, supported by the resources of the Bill & Melinda Gates Foundation, could be instrumental in developing solutions tailored to the specific needs of Andhra Pradesh. The meeting is also likely to explore collaborative projects that align with the Gates Foundation’s focus on global health issues, such as eradicating infectious diseases and enhancing maternal and child health.

Andhra Pradesh could serve as a pilot region for innovative health interventions that, if successful, might be scaled across India and other developing regions. Gates’s interest in AI aligns with a broader global trend, where technology is increasingly recognized as a catalyst for economic and social development.

AI, in particular, has the potential to revolutionize various sectors, from agriculture to education, offering unprecedented opportunities for growth and efficiency. For Andhra Pradesh, embracing AI could lead to improved agricultural productivity, enhanced educational outcomes, and more efficient public services.

This visit also reflects a symbiotic relationship between global tech leaders and regional governments. As tech companies seek to expand their presence in emerging markets, they find willing partners in governments eager to harness technology for development. This partnership is mutually beneficial: tech companies gain access to new markets and data, while governments receive the technological expertise and investment necessary to drive growth.

In conclusion, Bill Gates’s visit to Andhra Pradesh represents more than just a high-profile meeting. It symbolizes the potential for technology to transform societies and underscores the importance of strategic partnerships in realizing this potential. As Andhra Pradesh continues its journey toward becoming a tech-driven state, the insights and collaboration from Gates and his foundation could play a pivotal role in shaping its future. Both Gates and Naidu share a vision of leveraging technology for the greater good, and this meeting may mark a significant step toward achieving that vision.

According to GlobalNetNews.

AI Summit Sees Strong Attendance on Opening Day

The AI Summit in New Delhi attracted a significant crowd on its opening day, showcasing India’s growing role in the global artificial intelligence landscape.

The bustling metropolis of New Delhi, renowned for its vibrant culture and historic landmarks, has added another highlight to its profile by hosting the much-anticipated AI Summit. On its opening day, the conference drew an impressive crowd, reflecting the increasing interest and investment in artificial intelligence across India. The event served as a melting pot of innovation and collaboration, underscoring India’s expanding prowess in the AI sector.

India, with its vast pool of tech-savvy talent and a rapidly digitizing economy, has emerged as a formidable player in the global AI arena. The summit, held at the expansive Pragati Maidan, showcased this evolution. Attendees, ranging from industry leaders to tech enthusiasts, were greeted with a plethora of exhibits that highlighted the country’s advancements in AI technologies.

The significance of the summit extends beyond the impressive turnout. It marks a pivotal moment in India’s technological journey, as the nation seeks to position itself as a global hub for AI development. With a government eager to foster innovation and a private sector keen to capitalize on AI’s potential, the summit serves as a platform to bridge these ambitions. It is a space where ideas are exchanged, collaborations are forged, and future pathways are charted.

The opening day featured keynote speeches from prominent figures in the tech industry, both domestic and international. These speeches set the tone for the event, emphasizing the transformative potential of AI across various sectors, including healthcare, agriculture, finance, and education. The narrative was clear: AI is not merely a technological advancement but a powerful tool for societal change.

However, India’s AI journey is not without its challenges. As the country embraces this technology, it must navigate issues related to data privacy, ethical AI deployment, and the digital divide. The summit’s robust agenda, which includes panel discussions and workshops on these critical topics, indicates a proactive approach to addressing these concerns.

The event also highlighted the role of startups in driving AI innovation. India’s startup ecosystem, one of the largest in the world, is a hotbed of AI-driven solutions. Many of these startups were present at the summit, showcasing cutting-edge technologies that promise to revolutionize industries. Their participation underscores the entrepreneurial spirit fueling India’s AI ambitions.

International participation at the summit further emphasizes India’s growing influence in the AI sector. Delegates from various countries attended, exploring opportunities for collaboration and investment. This international interest reflects India’s strategic importance in the global tech landscape, particularly as nations seek to diversify their tech partnerships.

The AI Summit is more than just an exhibition; it is a reflection of India’s aspirations and capabilities. As the world grapples with the implications of AI, India is positioning itself not just as a participant but as a leader in shaping the future of this technology. The massive turnout on day one is a testament to the excitement and interest surrounding India’s AI journey.

As the summit progresses, it will be intriguing to see how the dialogues and discussions unfold, particularly in areas such as AI ethics, policy-making, and international collaboration. The outcomes of these conversations could significantly influence the trajectory of AI development in India and beyond.

In conclusion, the AI Summit in New Delhi is a landmark event that highlights India’s commitment to embracing and leading in the AI revolution. It is a celebration of innovation, a forum for critical discussions, and a catalyst for future growth. As the summit continues, all eyes will be on New Delhi, eager to see what the next chapter in India’s AI story will bring, according to GlobalNetNews.

Dhireesha Kudithipudi Leads First U.S. Open-Access Neuromorphic Computing Hub

Dhireesha Kudithipudi is spearheading the first open-access neuromorphic computing hub in the U.S. at the University of Texas at San Antonio, aiming to democratize artificial intelligence research.

Indian American computer scientist Dhireesha Kudithipudi is transforming the landscape of artificial intelligence (AI) in the United States. As the founding director of the MATRIX AI Consortium at the University of Texas at San Antonio (UTSA), she is at the forefront of launching THOR: The Neuromorphic Commons, the first open-access hub of its kind in the country.

Funded by the National Science Foundation, the THOR project seeks to democratize access to neuromorphic computing, a field that emulates the architecture of the human brain to process information. Unlike traditional silicon chips, which consume significant amounts of electricity regardless of the task, neuromorphic systems operate on an “event-based” model, activating only when new data is detected.

“THOR is the U.S. national hub for neuromorphic computing,” Kudithipudi stated. She also holds the Robert F. McDermott Chair in Engineering at UTSA. “We are democratizing the technology, expanding industry-academia partnerships, and serving as a catalyst for bringing neuromorphic computing closer to real-world applications.”

Historically, access to such advanced hardware has been limited to elite corporate laboratories or well-funded academic institutions. In contrast, UTSA’s new initiative functions similarly to a public library, allowing researchers and students nationwide to apply for free access to run experiments. This approach significantly lowers the barrier to entry for the next generation of engineers.

At the core of the hub is the SpiNNaker2 system, a substantial platform featuring approximately 400,000 processing elements. Developed in collaboration with SpiNNcloud, this hardware utilizes energy-efficient ARM-based cores, akin to those found in smartphones, to simulate the pulsing signals of biological neurons and synapses.

The practical implications of this energy efficiency are profound. According to the research team, neuromorphic chips have the potential to revolutionize medical devices. For instance, they could enable pacemakers to adapt in real-time to a patient’s physical distress or allow hearing aids to intelligently filter background noise without quickly draining their batteries.

In addition to energy savings, Kudithipudi and her colleagues are addressing the issue of “catastrophic forgetting,” a common flaw in AI systems where machines lose previously acquired knowledge when learning new information. By mimicking the brain’s “lifelong learning” capabilities, THOR could facilitate the development of AI that evolves continuously.

This initiative involves a nationwide collaboration, with contributions from experts at UT Knoxville, UC San Diego, and Harvard University. The official launch of THOR is scheduled for February 23, marking a significant milestone for UTSA’s newly established College of AI, Cyber and Computing.

For Kudithipudi, the overarching goal is to ensure that the future of computing is not only more powerful but also more accessible and sustainable for all.

The information for this article was sourced from The American Bazaar.

OnPhase Appoints Indian-American Sudarshan Ranganath as Chief Product Officer

OnPhase has appointed Sudarshan Ranganath as Chief Product Officer to enhance its AI-driven financial automation platform amid the evolving needs of modern finance departments.

OnPhase, a key player in the AI-driven financial automation sector, has announced the appointment of Indian American executive Sudarshan Ranganath as its new Chief Product Officer. In this pivotal role, Ranganath will guide the company’s product vision and execution, with a focus on scaling its unified platform to address the dynamic requirements of contemporary finance departments.

Ranganath joins the Tampa-based company at a time when digital transformation is rapidly reshaping the office of the CFO. With over 20 years of experience in business spend management and digital payments, he brings a wealth of knowledge in developing intelligent, cloud-based solutions designed to simplify complex financial workflows. His appointment is viewed as a strategic move aimed at enhancing OnPhase’s market presence and accelerating the adoption of its automated payment technologies.

“I am thrilled to be joining OnPhase at such an exciting time,” Ranganath stated, highlighting the transformative impact of AI on finance teams. He pointed out that CFOs are increasingly pressured to deliver strategic insights while maintaining stringent operational controls. Ranganath believes that OnPhase’s unified platform is essential for eliminating friction and reducing manual errors in financial processes.

Before taking on this new role, Ranganath served as Senior Vice President of Product Management and Strategy at Corcentric. During his tenure, he played a crucial role in driving revenue growth through both organic innovation and strategic acquisitions. He is also recognized for developing an AI-centric trading partner network aimed at modernizing B2B commerce.

Ranganath’s career includes leadership positions at notable companies such as Ellucian, Rivermine, and VeriSign, where he concentrated on SaaS transformations and international expansion. His extensive background in accounts payable and payment software aligns seamlessly with OnPhase’s core value proposition, as emphasized by Robert Michlewicz, CEO of OnPhase.

“He has worked at the intersection of product strategy, technology, and customer outcomes,” Michlewicz remarked. “His leadership will be instrumental as we take our platform and our company to the next level.”

For over 25 years, OnPhase has provided organizations with comprehensive tools to manage the entire lifecycle of an invoice, from capture to final payment. By consolidating these functions into a single platform, the company aims to eliminate the data silos that often hinder traditional finance departments.

Currently recognized on both the Deloitte Technology Fast 500 and the Inc. 5000 lists, OnPhase continues to establish itself as a leader in empowering finance leaders to operate with greater clarity and confidence, according to The American Bazaar.

India Showcases Technological Innovations at AI Impact Summit 2026

India is hosting the AI Impact Summit 2026, gathering global tech leaders to explore the transformative potential of artificial intelligence across economies, governance, and society.

As artificial intelligence (AI) approaches a pivotal role in reshaping human civilization, India is welcoming a summit of global tech leaders to discuss its implications for economies, governance, and society. The five-day Artificial Intelligence Impact Summit 2026 commenced on Monday evening, with Prime Minister Narendra Modi inaugurating the India AI Impact Expo 2026 at Bharat Mandapam, the summit venue in New Delhi.

In a post on X, Modi emphasized the significance of the summit, stating, “This is proof that our nation is making rapid progress in the fields of science and technology and is contributing significantly to global development.” He further highlighted the potential and capabilities of India’s youth, underscoring the nation’s commitment to harnessing AI for human-centric progress.

The theme of the summit, ‘Sarvajana Hitaya, Sarvajana Sukhaya,’ translates to “welfare for all, happiness for all,” reflecting India’s dedication to utilizing AI for the benefit of all citizens. The first day featured a leadership session focused on harnessing AI for the future of learning and work, examining how AI is reshaping global employment and redefining necessary skills.

Another significant session addressed the transformation of India’s judicial ecosystem through AI. Experts discussed the technology’s potential to enhance efficiency, transparency, and accessibility within the judicial system. Additionally, the summit included discussions on culturally grounded AI and social norms, emphasizing that AI systems often fail not due to technical limitations but because they overlook essential social contexts.

The future of employability in the age of AI is a central theme, with experts exploring how AI may create new job opportunities while rendering some existing roles obsolete, necessitating large-scale workforce reskilling. A special session titled “Artificial Intelligence for Smart and Resilient Agriculture – From Research to Solutions” aimed to gather diverse perspectives on how AI can support sustainable, efficient, and climate-resilient agricultural practices.

This summit is notable as the first global AI summit of its kind to take place in the Global South. It aims to foster a future where AI’s transformative impact serves humanity, drives inclusive growth, and promotes people-centric innovations to protect the planet.

The groundwork for the summit included five rounds of public consultations and global outreach sessions held in cities such as Paris, Berlin, Oslo, New York, Geneva, Bangkok, and Tokyo. The summit is anchored in three guiding principles: the Sutras of People, Planet, and Progress, which frame how AI should serve humanity, safeguard the environment, and promote inclusive growth.

Prior to the New Delhi summit, a strategic pre-summit gathering took place in Washington, D.C., where policymakers, technologists, diplomats, and founders convened to discuss “Co-Creating the Future: Global South–Global North Collaboration for AI Impact.” This gathering reinforced the notion that AI discussions can no longer be geographically concentrated.

The New Delhi Summit aims to chart a path toward a future where AI’s transformative power serves humanity, fosters social development, and promotes innovations that protect the planet. It also seeks to amplify the voice of the Global South, ensuring that technological advancements and opportunities are shared broadly rather than concentrated in a few regions.

However, the rapid proliferation of AI across society presents urgent challenges, including disruptions to traditional employment patterns, exacerbation of biases, and increased energy consumption. These developments underscore the need to move beyond aspirational frameworks and deliver measurable, concrete impacts that address both the promises and perils of AI.

OpenAI CEO Sam Altman, ahead of the summit, noted India’s tech talent, national strategy, and optimism about AI’s potential, stating that the country possesses “all the ingredients to be a full-stack AI leader.” In an article for The Times of India, he outlined three priorities for collaboration: scaling AI literacy, building computing and energy infrastructure, and integrating AI into real workflows.

Altman expressed OpenAI’s commitment to partnering with the Indian government to make AI and its benefits accessible to more people across the country. “AI will help define India’s future, and India will help define AI’s future. And it will do so in a way only a democracy can,” he wrote.

The AI Impact Summit 2026 represents a significant milestone in the global conversation surrounding artificial intelligence, highlighting India’s role as a leader in the technology’s development and implementation.

According to The American Bazaar, the summit is set to pave the way for a future where AI’s transformative capabilities are harnessed for the greater good.

Android Malware Disguised as Fake Antivirus App Targets Users

Cybersecurity experts warn that a fake antivirus app named TrustBastion is using Hugging Face to distribute Android malware that can steal sensitive information from users’ devices.

Android users should be on high alert as cybersecurity researchers have identified a new threat involving a fake antivirus application called TrustBastion. This malicious app exploits Hugging Face, a widely used platform for sharing artificial intelligence (AI) tools, to deliver dangerous malware that can capture screenshots, steal personal identification numbers (PINs), and display fraudulent login screens.

The TrustBastion app initially presents itself as a helpful security tool, claiming to offer virus protection, phishing defense, and malware blocking. However, once installed, it quickly reveals its true nature. The app falsely alerts users that their device is infected, prompting them to install an update that actually delivers the malware. This tactic, known as scareware, preys on users’ fears and encourages them to act without thinking.

According to Bitdefender, a global cybersecurity firm, the campaign surrounding TrustBastion is particularly concerning due to its deceptive nature. Victims are often misled by ads or warnings suggesting their devices are compromised, leading them to manually download the app. The attackers cleverly hosted TrustBastion’s APK files on Hugging Face, embedding them within seemingly legitimate public datasets, which allowed the malicious code to go unnoticed.

Once installed, TrustBastion immediately prompts users to download a “required update,” which is when the actual malware is introduced. Despite researchers reporting the malicious repository, Bitdefender noted that similar repositories quickly reemerged, often with minor cosmetic changes but maintaining the same harmful functionality. This rapid re-creation complicates efforts to fully eliminate the threat.

The malware associated with TrustBastion is invasive and poses significant risks. Bitdefender reports that it can take screenshots, display fake login screens for financial services, and capture users’ lock screen PINs. The stolen data is then transmitted to a third-party server, allowing attackers to drain bank accounts or lock users out of their devices.

Google has reassured users that those who stick to official app stores are generally protected against this type of malware. A Google spokesperson stated, “Based on our current detection, no apps containing this malware are found on Google Play.” Google Play Protect, which is enabled by default on Android devices with Google Play Services, helps safeguard users by warning them about or blocking apps known to exhibit malicious behavior, even if they originate from outside the Play Store.

This incident serves as a stark reminder of the importance of cautious app downloading practices. Users are advised to only download applications from reputable sources, such as the Google Play Store or the Samsung Galaxy Store, which have moderation and scanning processes in place. It is also crucial to scrutinize app ratings, download counts, and recent reviews, as fake security apps often garner vague feedback or experience sudden rating spikes.

Even the most vigilant users can fall victim to data exposure. Utilizing a data removal service can help eliminate personal information, such as phone numbers and email addresses, from data broker sites that criminals exploit. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of follow-up scams and account takeovers.

To further enhance security, users should regularly scan their devices with Google Play Protect and consider backing up their protection with robust antivirus software. Although Google Play Protect automatically removes known malware, it is not infallible. Historically, it has not always been 100% effective in eliminating all malware from Android devices.

To safeguard against malicious links that could install malware and compromise personal information, users should ensure they have strong antivirus software installed across all devices. This software can also help detect phishing emails and ransomware, protecting personal information and digital assets.

Additionally, users should avoid installing apps from websites outside of official app stores, as these apps bypass essential security checks. It is vital to verify the publisher name and URL before downloading any application. Enabling two-step verification (2FA) and using strong, unique passwords stored in a password manager can also help prevent account takeovers.

Finally, users should remain cautious about granting accessibility permissions, as malware often exploits these to gain control over devices. This incident illustrates how quickly trust can be weaponized, with a platform designed for advancing AI research being repurposed to distribute malware. A fake antivirus app has become the very threat it claims to protect against, underscoring the need for users to scrutinize even seemingly trustworthy applications.

For those who have encountered suspicious activity on their devices, sharing experiences can help raise awareness. Users are encouraged to report their findings and concerns to relevant platforms.

According to Bitdefender, staying informed and cautious is the best defense against evolving cyber threats.

Astronauts Arrive at ISS for Eight-Month Mission Following Medical Emergency

Four astronauts arrived at the International Space Station for an eight-month mission, following an early evacuation due to a medical emergency last month.

Four new astronauts arrived at the International Space Station (ISS) on Saturday, restoring the lab to full capacity after a medical emergency forced an early evacuation of several crew members last month. The international crew, which includes NASA Commander Jessica Meir, launched from Cape Canaveral in a SpaceX rocket on Friday, embarking on a journey that lasted approximately 34 hours.

“That was quite the ride,” Meir remarked shortly after the launch, as reported by BBC News. “We have left the Earth, but the Earth has not left us.” The launch had faced delays due to weather concerns prior to takeoff.

Joining Meir for the next eight to nine months aboard the ISS are NASA astronaut Jack Hathaway, France’s Sophie Adenot, and Russian cosmonaut Andrei Fedyaev. Both Meir and Fedyaev have previous experience aboard the ISS, with Meir notably participating in the first all-female spacewalk in 2019. Adenot, a military helicopter pilot, is only the second French woman to travel to space, while Hathaway serves as a captain in the U.S. Navy.

NASA reported that the spacecraft is set to autonomously dock with the space station’s Harmony module at 3:15 p.m. CT on Saturday, traveling at a speed of 17,000 mph in Earth orbit. “What an absolutely wonderful start to the day,” said NASA Administrator Jared Isaacman following the launch. “This mission has shown in many ways what it means to be mission-focused at NASA.”

Isaacman also highlighted the recent adjustments made by NASA, including the early return of Crew-11 and the expedited launch of Crew-12, all while preparing for the upcoming Artemis 2 mission, which is scheduled to begin in early March.

This mission marks the 12th crew rotation with SpaceX as part of NASA’s Commercial Crew Program. Crew-12 will engage in scientific investigations and technology demonstrations aimed at preparing humans for future exploration missions to the Moon and Mars, as well as providing benefits for people on Earth.

After docking, the capsule’s hatch opened at 4:14 p.m. CT, allowing the crew to enter the space station. “We are so excited to be here and get to work,” Meir expressed upon arrival. Adenot added, “The first time we looked at the Earth was mind-blowing. … We saw no lines, no borders.”

Prior to the arrival of the new crew, only one American and two Russians remained at the space station, ensuring its continued operation. The medical evacuation that took place in January was the first of its kind in 65 years, as NASA reported that a crew member experienced a serious health issue. The agency has not disclosed the nature of the medical condition or the identity of the astronaut involved, citing medical privacy.

The astronaut who faced the medical emergency, along with three other crew members who had launched with them, returned to Earth more than a month earlier than planned after the decision was made to bring them home.

According to the Associated Press, the successful arrival of the new crew marks a significant step forward for ongoing research and exploration efforts aboard the ISS.

Superhealth Launches SuperOS, Claims First Agentic AI Hospital

Superhealth has introduced SuperOS, touted as the world’s first agentic AI operating system designed to manage hospital operations entirely, marking a significant advancement in healthcare automation in India.

Superhealth has launched what it claims to be the world’s first agentic AI operating system, named SuperOS, designed to manage a hospital from end to end. This initiative positions India as a potential leader in large-scale healthcare automation.

SuperOS is crafted as a comprehensive system that integrates nearly every aspect of hospital operations. According to the company, it encompasses everything from outpatient consultations and diagnostics to surgical workflows and discharge summaries. Varun Dubey, the founder of Superhealth, emphasized the platform’s capabilities, stating, “SuperOS is the world’s first agentic AI operating system built to actually run a hospital, from clinical decisions to operations, from labs to discharge, from OT assignments to auto prescriptions, it does it all.”

Dubey further explained that SuperOS understands the needs of doctors, nurses, and patients, as well as 15 Indian languages. The system orchestrates outcomes by facilitating real-time interactions between human staff and AI agents. “Only Superhealth could build this, because we are the only full-stack provider that designs, builds, and operates hospitals while also developing all the technology that runs them,” he added. “This is not software that merely assists healthcare. This is technology that operates healthcare.”

The introduction of SuperOS places Superhealth in the midst of global discussions about integrating AI into hospital systems. While many healthcare facilities are exploring AI tools for specific tasks, Superhealth is marketing SuperOS as a unified operating layer that connects clinical and administrative functions in real time.

According to the company, SuperOS serves as an intelligent framework across the hospital, coordinating tasks between AI agents and human teams. In outpatient departments, it acts as an ambient clinical co-pilot, providing patient history, assisting with differential diagnoses, drafting prescriptions for physician approval, and coordinating with lab technicians and pharmacists directly in the consultation room. The aim is to reduce wait times and enhance meaningful interactions between doctors and patients.

SuperOS is also integrated into radiology and pathology workflows. The platform replaces traditional Picture Archiving and Communication Systems (PACS) with cloud-based imaging systems and employs instant 3D volumetric analysis to aid in the detection of conditions in neurology, orthopaedics, chest trauma, and oncology. Superhealth claims that this integration reduces reporting time by 30 percent and effectively triples the capacity of specialists.

For inpatient and surgical care, SuperOS coordinates operating rooms, surgeons, and recovery workflows. It continuously monitors patients in both regular and intensive care units, utilizing personalized alerts, automating discharge summaries through a feature dubbed “Magic Discharge,” and conducting real-time audits of all clinical interactions to enhance medical quality.

Dubey framed the launch of SuperOS as part of a broader national ambition, stating, “India has a unique opportunity to show the world what real, meaningful healthcare AI looks like. SuperOS is built in India, for India, using Indian clinical data. It is also deployed in India and is focused on solving problems that matter to our country and our people.”

Superhealth is working to establish a network of 100 hospitals, supported by full-time senior clinicians, advanced infrastructure, and a zero-commission business model aimed at transparency and simplicity. Central to this expansion is SuperOS, which the company describes as operating seamlessly alongside healthcare professionals while enhancing efficiency across consultations, diagnostics, surgery, pharmacy, and recovery.

As hospitals worldwide face challenges such as staffing shortages, rising costs, and burnout, Superhealth is making a bold assertion that an AI-native operating system can transition from merely assisting care to actively managing it. The scalability of this model beyond India will be closely monitored by healthcare systems in the United States and other countries.

According to The American Bazaar, the implications of SuperOS could reshape the landscape of hospital management and patient care, setting a precedent for future innovations in healthcare technology.

Instagram Chief Defends App Design Amid Youth Mental Health Lawsuit

Adam Mosseri, head of Instagram, testified in a California trial addressing the platform’s impact on youth mental health, defending its design against claims of addiction and negligence.

Adam Mosseri, the head of Instagram, took the witness stand on Wednesday in a pivotal trial in Los Angeles that could significantly influence how Silicon Valley addresses the mental health of its youngest users.

During his testimony, Mosseri defended Instagram against allegations that the platform was intentionally designed to be addictive, particularly among young users, contributing to a mental health crisis among adolescents. The case was brought forth by a 20-year-old woman from California, identified as Kayle, who argued that the app’s “endless scroll” feature and instant gratification elements led to years of depression and body dysmorphia from an early age.

In response to the term “addiction,” Mosseri reframed the discussion, describing it as “problematic use” that varies from individual to individual. He also addressed internal communications from 2019 concerning face-altering “plastic surgery” filters. While some teams within the company raised concerns that these tools could harm the self-esteem of teenage girls, Mosseri and Meta CEO Mark Zuckerberg initially considered lifting a ban on such filters to promote user growth. Ultimately, the company decided to maintain the ban on filters that overtly promote cosmetic surgery.

“I was trying to balance all the different considerations,” Mosseri told the jury, according to reports from the courtroom.

Several parents who have lost children to the adverse effects of social media were present in the courtroom, sharing their grief as part of the ongoing case. Victoria Hinks, whose daughter died by suicide at the age of 16, stated that their children had become “collateral damage” in Silicon Valley’s “move fast and break things” culture. Outside the courthouse, she remarked, “Our children were the first guinea pigs,” a sentiment that Mosseri countered during his testimony by asserting that the “move fast and break things” motto, originally coined by Zuckerberg, is no longer applicable.

The plaintiff’s attorney, Mark Lanier, argued that the platform operates like a “slot machine in a child’s pocket,” designed to exploit developing brains for profit. He contended that Meta was aware of the psychological toll its platform could take but prioritized user engagement over the well-being of its young audience.

This trial serves as a critical “bellwether” for over 1,500 similar lawsuits filed across the country. It also tests the boundaries of Section 230, the federal law that typically protects platforms from liability for user-generated content. If the jury finds Meta negligent in its product design, it could lead to significant financial repercussions and compel substantial changes to social media algorithms.

Meta maintains that it has implemented numerous safety features for teens, including parental controls and time limits. Zuckerberg is expected to testify later this month as the trial continues to explore the complex relationship between technology profits and the vulnerability of the teenage mind, according to American Bazaar.

Back-to-Back Founder Exits Shake Elon Musk’s xAI Team

Elon Musk’s xAI is facing significant leadership changes as two co-founders recently departed, raising concerns about the company’s stability amid ambitious plans and regulatory scrutiny.

Elon Musk’s xAI is currently navigating a challenging period, marked by the recent departures of two co-founders within just two days. This leadership churn comes at a time when expectations for the company are exceptionally high, as Musk continues to promote bold ambitions for the future of artificial intelligence.

In the latest development, influential AI researcher Jimmy Ba announced his exit from xAI on Tuesday. In a post on X, Ba expressed gratitude for his early involvement, stating he was “grateful to have helped cofound at the start.” His departure follows that of fellow co-founder Tony Wu, who revealed his resignation just one day earlier.

The timing of these resignations is particularly notable, as they occurred shortly after xAI was merged with Musk’s aerospace company, SpaceX, earlier this month. This merger is reportedly part of SpaceX’s preparations for a public listing later this year.

Ba, who is a professor at the University of Toronto, played a significant role in developing research that informed xAI’s Grok 4 models. His exit adds to a growing list of senior departures from the startup, which has now seen six of its original twelve founders leave, five of them within the past year.

Other co-founders, including Igor Babuschkin, Kyle Kosic, and Christian Szegedy, have also exited the company. Additionally, Greg Yang announced last month that he would be scaling back his involvement to focus on his health, specifically dealing with Lyme disease.

The merger between xAI and SpaceX was structured as an all-stock transaction, valuing SpaceX at $1 trillion and xAI at $250 billion, according to documents cited by CNBC. Earlier, in March 2025, Musk utilized xAI in a separate all-stock deal to acquire his social media platform, X.

These leadership changes come amid increasing regulatory scrutiny for xAI in various regions, including Europe, Asia, and the United States. Investigations were initiated after xAI’s Grok chatbot and image generation tools were found to facilitate the large-scale creation and distribution of non-consensual explicit content, commonly referred to as deepfake pornography. This material included images of real individuals, including minors, raising alarms among regulators across multiple jurisdictions.

Musk founded xAI in 2023 with a team of 11 others, positioning the company as a competitor to OpenAI and Google in the rapidly evolving AI landscape. At its inception, xAI stated its mission was to “understand the true nature of the universe,” setting an ambitious tone for what Musk envisioned as a transformative venture.

In response to the recent departures, Musk quickly convened an all-hands meeting with xAI staff on Tuesday night. This meeting aimed to reset the narrative and outline a sweeping vision for the company’s future. According to reports from The New York Times, Musk told employees that xAI would eventually require a manufacturing base on the moon. He proposed the idea of building AI-powered satellites there and launching them into space using a massive catapult. “You have to go to the moon,” Musk stated, as reported by The New York Times.

Musk suggested that establishing a presence on the moon would provide xAI with access to computing capacity far exceeding that of its competitors. He implied that such advancements could unlock forms of intelligence that are currently difficult to conceptualize. “It’s difficult to imagine what an intelligence of that scale would think about,” he added, “but it’s going to be incredibly exciting to see it happen.”

As the company grapples with these leadership changes, Musk appears determined to refocus attention on xAI’s ambitious goals, including the potential for a public listing. The recent exits of key figures underscore the challenges facing the company, but Musk’s vision for the future remains steadfast.

According to The New York Times, the ongoing developments at xAI highlight the complexities of managing a rapidly evolving tech startup in an increasingly scrutinized industry.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms by 2030.

This week, NASA announced the completion of its strategy aimed at sustaining a human presence in space, particularly in light of the planned de-orbiting of the International Space Station (ISS) in 2030. The agency’s document underscores the necessity of ensuring extended stays in orbit following the retirement of the ISS.

“NASA’s Low Earth Orbit Microgravity Strategy will guide the agency toward the next generation of continuous human presence in orbit, enable greater economic growth, and maintain international partnerships,” the document states.

The commitment to this strategy comes amid concerns regarding the readiness of new space stations. With the incoming administration’s focus on budget cuts through the Department of Government Efficiency, there are apprehensions that NASA may face funding reductions.

“Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities,” said NASA Deputy Administrator Pam Melroy.

Commercial space company Voyager is actively developing one of the potential replacements for the ISS. The company has expressed support for NASA’s strategy to maintain a human presence in space. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” stated Jeffrey Manber, Voyager’s president of international and space stations.

The initiative to maintain a permanent human presence in space dates back to President Reagan, who emphasized the importance of private partnerships in his 1984 State of the Union address. “America has always been greatest when we dared to be great. We can reach for greatness,” he said, highlighting the potential for the space transportation market to exceed national capabilities.

The ISS, which has been continuously occupied for 24 years, was launched in 1998 and has hosted over 28 astronauts from 23 countries. The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and stressed the need to transition to commercial platforms—a policy that has been maintained by the Biden administration.

“Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031,” NASA Administrator Bill Nelson remarked in June.

Recent discussions have raised questions about the continuity of human presence in space. “I just want to talk about the elephant in the room for a moment, continuous human presence. What does that mean? Is it continuous heartbeat or continuous capability?” Melroy noted during the International Astronautical Congress in October.

NASA’s finalized strategy has taken into account the concerns of commercial and international partners regarding the potential loss of the ISS without a commercial station ready to take its place. “Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy explained. “I think this continuous presence, it’s leadership. Today, the United States leads in human spaceflight. The only other space station that will be in orbit when the ISS de-orbits, if we don’t bring a commercial destination up in time, will be the Chinese space station. We want to remain the partner of choice for our industry and for our goals for NASA.”

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

“We’ve had some challenges, to be perfectly honest with you. The budget caps that were a deal cut between the White House and Congress for fiscal years 2024 and 2025 have left us without as much investment,” Melroy acknowledged. “So, what we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit.”

Voyager has stated that it is on track with its development timeline and plans to launch its starship space station in 2028. “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station,” Manber asserted. “Everyone knows SpaceX, but there are hundreds of companies that have created the space economy. If we lose permanent presence, you lose that supply chain.”

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could be crucial for some projects. NASA may also consider funding new space station proposals, including concepts from Long Beach, California’s Vast Space, which recently unveiled plans for its Haven modules, aiming to launch Haven-1 as soon as next year.

“We absolutely think competition is critical. This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” Melroy concluded.

According to Fox News, NASA’s strategy reflects a commitment to ensuring a sustainable human presence in space as the agency navigates the transition from the ISS to future commercial platforms.

Microsoft ‘Important Mail’ Email Scam: How to Identify It

Scammers are increasingly impersonating Microsoft, sending deceptive emails that threaten account access to trick victims into clicking malicious links.

Scammers are becoming more sophisticated in their tactics, particularly when it comes to impersonating reputable companies like Microsoft. Recently, a fraudulent email claiming to be an urgent warning about email account access has raised alarms among users.

The email appears serious and time-sensitive, which is a common strategy used by scammers to provoke immediate action. A concerned individual named Lily reached out for assistance, expressing uncertainty about the validity of the message she received. She attached screenshots of the email, hoping for guidance.

It is crucial to note that this email is not from Microsoft; it is a scam designed to rush individuals into clicking dangerous links. The urgency of the message is a red flag that should not be ignored.

Upon closer inspection, several warning signs indicate that the email is fraudulent. For instance, it begins with a generic greeting, “Dear User,” rather than addressing the recipient by name, which is a standard practice for legitimate Microsoft communications.

The email claims that the recipient’s email access will be suspended on February 5, 2026. Scammers often exploit fear and urgency to cloud judgment and prompt hasty decisions.

Additionally, the email originates from an AOL address (accountsettinghelp20@aol.com), which is another significant indicator of its illegitimacy. Microsoft does not send security notifications from AOL or any other third-party email service.

Another alarming feature of the email is the phrase “PROCEED HERE,” which is designed to incite quick clicks. Legitimate Microsoft communications will always direct users to clearly labeled Microsoft.com pages.

Moreover, the email contains phrases like “© 2026 All rights reserved,” which scammers often copy and paste to create a false sense of authenticity. Genuine Microsoft account alerts do not include image attachments, making this another major warning sign.

If a recipient were to click on the link provided in the email, they would likely be redirected to a counterfeit Microsoft login page. This is a tactic used by attackers to steal personal information, including email credentials, which can lead to further scams and identity theft.

To protect yourself from such scams, it is essential to take a cautious approach when encountering suspicious emails. Here are some steps to consider:

First, do not click on any links, buttons, or images in the email. Avoid replying to the message, and be cautious even when opening attachments, as they can trigger malware or tracking mechanisms.

Ensure that you have strong antivirus software installed and that it is up to date. This software can help block phishing attempts, scan attachments, and alert you to dangerous links before any damage occurs.

If you receive an email like this, report it and delete it from your inbox. There is no reason to keep it, even in your trash folder.

For peace of mind, open a new browser window and navigate directly to the official Microsoft account website. Sign in as you normally would; if there is a legitimate issue, it will be displayed there.

If you accidentally clicked on any links or entered your information, change your Microsoft password immediately. Use a strong, unique password that you do not use elsewhere. A password manager can help generate and securely store your passwords.

Additionally, check if your email has been exposed in previous data breaches. Some password managers include built-in breach scanners that can alert you if your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords and secure those accounts with new, unique credentials.

Enabling two-factor authentication (2FA) for your Microsoft account adds an extra layer of security, making it more difficult for attackers to gain access even if they have your password.

Scammers often gather information about potential targets through data broker sites. Using a data removal service can help minimize the amount of personal information available online, reducing your vulnerability to phishing attempts.

While no service can guarantee complete removal of your data from the internet, a data removal service can effectively monitor and erase your personal information from numerous websites, providing peace of mind.

Utilize your email app’s built-in reporting tool to help train filters and protect other users from encountering the same scam.

When Microsoft genuinely needs your attention, the communication will look very different from these scams. Recognizing the contrast can make it easier to identify fraudulent messages.

Scammers rely on urgency to distract and manipulate individuals, especially when it comes to something as central to our lives as email. The good news is that taking a moment to pause and verify can make a significant difference.

Lily’s decision to seek help before acting was a wise move that could prevent identity theft and account takeovers. Remember, emails that threaten account shutdowns and demand immediate action are almost always illegitimate. When faced with urgency, take a step back, verify independently, and never let an email rush you into a mistake.

If you have encountered a fake Microsoft warning or a similar scam, share your experience with us at Cyberguy.com.

For more information on protecting yourself from scams, consider signing up for the free CyberGuy Report, which offers tech tips, urgent security alerts, and exclusive deals delivered directly to your inbox.

According to CyberGuy.com, staying informed and cautious is key to safeguarding your digital life.

Ring’s AI Search Party Aims to Locate Lost Dogs More Efficiently

Ring has launched its AI-powered Search Party feature nationwide, enabling users to leverage nearby cameras to quickly locate lost dogs, even if they do not own a Ring device.

Ring has expanded its AI-powered Search Party feature across the United States, allowing anyone to utilize nearby cameras to help locate lost dogs more efficiently.

Losing a dog can be a distressing experience, often leading to frantic searches around the neighborhood and constant refreshes of local social media groups in hopes of finding a clue. To alleviate some of this stress, Ring aims to transform entire communities into additional eyes through the power of artificial intelligence. The Search Party feature now enables users to tap into a network of outdoor cameras to spot missing pets, and for the first time, it is accessible to anyone, regardless of whether they own a Ring camera.

Search Party is designed as a community-driven tool that expedites the reunion of lost dogs with their families. When a user reports a missing dog in the Ring app, nearby outdoor Ring cameras utilize AI to scan recent footage for potential matches. If a possible match is identified, the camera owner receives an alert containing a photo of the lost dog and a video clip. They can then choose to either ignore the alert or assist in the search, ensuring that sharing remains optional and pressure is minimized.

This update marks a significant shift in the functionality of Search Party. Previously, only individuals with Ring devices could access this feature. Now, anyone in the U.S. can download the free Ring Neighbors app, register, and post a lost dog alert. This change allows dog owners to connect with an existing network of cameras without the need for additional hardware or subscription fees. Neighbors without cameras can also contribute by sharing alerts and keeping an eye out for sightings.

Lost pets are already one of the most common types of posts in the Ring Neighbors app, with over 1 million reports of lost or found pets shared last year. Given that approximately 60 million households in the U.S. own at least one dog, the potential impact of Search Party is substantial.

Getting started with Search Party is straightforward. Users can download the Ring app for free from the App Store or Google Play. Once registered, anyone can create a Lost Dog Post in the app. If the post meets the necessary criteria, the app guides users through the steps to activate Search Party. This process involves sharing photos and basic information about the missing dog, after which nearby cameras will begin scanning automatically.

Search Party alerts are temporary. When a user initiates a Search Party in the Ring app, it operates for a few hours. If the dog remains missing, the user must renew the Search Party or start a new one to ensure that nearby cameras continue their search for matches. Once the dog is found, users can update their post to inform the community that the search is over.

The AI technology behind Search Party aims to reunite lost dogs with their owners efficiently. If an outdoor Ring camera detects a potential match, the camera owner is notified with an alert that includes a photo of the missing dog and a video clip. The camera owner retains control throughout the process, deciding whether to share footage or contact the owner through the app, all while keeping their phone number private.

Ring reports that Search Party has already yielded impressive results. In one instance, a woman named Kylee from Wichita, Kansas, was reunited with her mixed-breed dog, Nyx, just 15 minutes after he escaped through a small hole in her backyard fence. A neighbor’s Ring camera captured footage of Nyx and shared it through the app, providing Kylee with her only lead. “I was blown away,” Kylee said, emphasizing that even dogs with microchips can go unrecognized if they lack a collar. She credits the shared video for Nyx’s swift return, stating that she likely would not have found him without the Ring app.

Nyx is not the only success story. Ring claims that Search Party has facilitated the reunion of more than one lost dog per day, including pets like Xochitl in Houston, Truffle in Bakersfield, Lainey in Surprise, Zola in Ellenwood, Toby in Las Vegas, Blu in Erlanger, Zeus in Chicago, and Coco in Stockton, with more reunions occurring daily.

Search Party remains an optional feature that users can enable or disable at any time within the Ring app. Alongside this expansion, Ring has committed $1 million to equip animal shelters with camera systems, aiming to support up to 4,000 shelters across the United States. By integrating shelters into the network, Ring hopes to facilitate faster reconnections between dogs picked up by shelters and their owners. The company is also collaborating with organizations like Petco Love and Best Friends Animal Society and is open to additional partnerships.

Despite its benefits, the launch of Search Party last fall faced some criticism, particularly regarding privacy concerns and Ring’s connections to law enforcement. Ring maintains that participation is voluntary and that sharing footage is optional. However, the feature is enabled by default for compatible outdoor cameras, which has raised eyebrows. Nevertheless, the company appears confident in its offering and is actively promoting Search Party, even featuring it in a Super Bowl commercial.

Search Party taps into a familiar concept of neighbors helping one another during a challenging time. By making this feature available to everyone, Ring has removed a significant barrier, increasing the likelihood of quick reunions. Whether this tool becomes a community staple or ignites further privacy discussions will depend on how it is utilized by the public.

Would you be comfortable with neighborhood cameras assisting in the search for your lost dog, or does that raise concerns about surveillance? Share your thoughts with us at Cyberguy.com.

According to Fox News, the Search Party feature represents a significant advancement in community-driven pet recovery efforts.

SoundCloud Data Breach Affects Nearly 30 Million User Accounts

SoundCloud has confirmed a data breach affecting approximately 29.8 million user accounts, exposing email addresses and profile information to hackers and leaving many users unable to access their accounts.

SoundCloud, one of the world’s largest audio platforms, has reported a significant data breach that has compromised the personal and contact information of approximately 29.8 million users. This incident has left many affected users locked out of their accounts, encountering error messages when attempting to log in.

Founded in 2007, SoundCloud has grown into a prominent service for artists, hosting over 400 million tracks from more than 40 million creators. The scale of this breach raises serious concerns about user security. The company detected unauthorized activity linked to an internal service dashboard, prompting the initiation of its incident response process. Users began experiencing 403 Forbidden errors, particularly when connecting through virtual private networks (VPNs).

Initially, SoundCloud stated that the attackers accessed limited data and did not compromise passwords or financial information. The company claimed that the exposed information consisted of data that users had already made public on their profiles. However, subsequent disclosures revealed a more alarming situation.

According to the data breach notification service Have I Been Pwned, the attackers managed to harvest data from around 29.8 million accounts. Although no passwords were taken, the exposure of email addresses linked to public profiles poses a significant risk. This combination can facilitate phishing attempts, impersonation, and targeted scams.

Security researchers have linked the breach to ShinyHunters, a notorious extortion gang. Sources informed BleepingComputer that the group attempted to extort SoundCloud following the breach. SoundCloud confirmed these claims, stating that attackers made demands and launched email-flooding campaigns aimed at harassing users, employees, and partners. ShinyHunters has also claimed responsibility for recent voice phishing attacks targeting single sign-on systems at major companies such as Okta, Microsoft, and Google.

While the breach may seem less severe than those involving passwords or credit card information, this assumption can be misleading. Email addresses associated with real profiles enable scammers to craft convincing messages, posing as SoundCloud, brands, or even other creators. With access to follower counts and usernames, these messages can appear personal and credible. Once attackers gain the trust of their targets, they can push malicious links, malware, or fake login pages, often leading to larger account takeovers.

SoundCloud has not disclosed whether further details will be made available. The company confirmed the attack and the extortion attempt but has not responded to follow-up inquiries regarding the breach’s scope or its internal controls. For users, the long-term risk lies in how widely this dataset may spread. Once exposed, data rarely disappears and can circulate across forums, marketplaces, and scam networks for years.

In response to the breach, a SoundCloud representative stated, “We are aware that a threat actor group has published data online allegedly taken from our organization. Please know that our security team—supported by leading third-party cybersecurity experts—is actively reviewing the claim and published data.” The company has reiterated that it has found no evidence of sensitive data, such as passwords or financial information, being accessed.

For those with SoundCloud accounts, it is crucial to take immediate action. Even limited data exposure can lead to targeted scams if ignored. Users should be vigilant and monitor their inboxes for messages related to SoundCloud, music uploads, copyright issues, or account warnings. It is advisable not to click on links or open attachments from unexpected emails. When in doubt, users should visit the official website directly instead of using email links. Additionally, employing strong antivirus software can provide an extra layer of protection.

While passwords were not exposed, changing them is still a prudent measure. Users should create new passwords that are unique and not reused across other platforms. For those who struggle to remember passwords, utilizing a password manager can help generate and securely store strong passwords, thereby reducing the risk of reuse.

Furthermore, users should check if their email addresses have been involved in past breaches. Many password managers include built-in breach scanners that can alert users if their email addresses or passwords have appeared in known leaks. If a match is found, it is essential to change any reused passwords and secure those accounts with new, unique credentials.

Implementing two-factor authentication (2FA) adds an important security layer in case someone attempts to access an account. Even if attackers manage to guess or obtain a password, they will still require a second verification step. Users should enable 2FA wherever SoundCloud or connected services offer it.

After most breaches, attackers often use exposed email addresses to test logins across various streaming services, social media, and shopping accounts. Users should be on the lookout for password reset emails they did not request or login alerts from unfamiliar locations. If anything seems suspicious, it is vital to act quickly.

The SoundCloud breach serves as a reminder that data breaches can have far-reaching consequences, even when the exposed information appears harmless. Public profile data combined with private contact details creates real exposure. Staying alert, limiting data sharing, and adopting strong security practices remain the best defenses as data breaches continue to escalate.

For further information and updates on this situation, users are encouraged to stay informed and proactive in protecting their online presence, especially in light of the evolving landscape of cyber threats. According to Have I Been Pwned, vigilance is key in safeguarding personal information.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers recently misidentified a Tesla Roadster launched into space by SpaceX in 2018 as an asteroid, prompting a swift correction from the Minor Planet Center.

Astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics in Massachusetts recently made an amusing error when they mistook a Tesla Roadster for an asteroid. This incident occurred earlier this month, nearly seven years after the car was launched into orbit by SpaceX CEO Elon Musk.

The object, initially designated as 2018 CN41, was registered by the Minor Planet Center but was deleted from the registry just one day later on January 3. The center clarified that the object’s orbit matched that of an artificial object, specifically the Falcon Heavy upper stage with the Tesla Roadster attached. In a statement on their website, they noted, “The designation 2018 CN41 is being deleted and will be listed as omitted.”

The Tesla Roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. At the time, it was expected to enter an elliptical orbit around the sun, extending just beyond Mars before looping back toward Earth. However, Musk later indicated that the vehicle exceeded Mars’ orbit and continued on toward the asteroid belt.

When the Roadster was misidentified as an asteroid earlier this month, it was located less than 150,000 miles from Earth—closer than the moon’s orbit. This proximity raised concerns among astronomers, who felt it necessary to monitor the object closely.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the implications of this mix-up. He pointed out the challenges associated with untracked objects in space, stating, “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” highlighting the potential risks of misidentification.

The incident serves as a reminder of the complexities involved in tracking artificial objects in space, especially as more private companies like SpaceX continue to launch vehicles into orbit.

Fox News Digital has reached out to SpaceX for further comment regarding the incident.

According to Astronomy Magazine, the mix-up illustrates the ongoing challenges in space observation and the importance of accurate tracking systems as the number of objects in orbit continues to grow.

Qualcomm Completes 2 nm Chip Design at Indian Centers

Qualcomm Technologies has achieved a significant milestone by completing the tape-out of its 2nm semiconductor design, showcasing India’s growing role in advanced chip design.

Qualcomm Technologies, a leading American chipmaker, has announced the successful tape-out of its 2nm semiconductor design. This achievement marks a pivotal moment in advanced chip design and highlights India’s rapidly expanding semiconductor ecosystem.

The breakthrough was developed with substantial contributions from Qualcomm’s engineering centers located in Bengaluru, Chennai, and Hyderabad. This accomplishment reinforces India’s emerging status as a global hub for cutting-edge chip design.

According to Qualcomm, this milestone reflects the depth of its engineering presence in India, which has become the company’s largest engineering footprint outside the United States. The achievement underscores India’s expanding role in advanced semiconductor innovation.

The milestone was showcased at Qualcomm’s facility in Bengaluru during a visit from Ashwini Vaishnaw, the Indian Minister for Railways, Information and Broadcasting, and Electronics and IT. Vaishnaw remarked that “India is increasingly at the center of how advanced semiconductor technologies are being designed for the future.” He described the development as a testament to the growing maturity of the country’s design ecosystem and its ambition to establish a globally competitive semiconductor industry.

Qualcomm has invested in India for over two decades, building extensive capabilities in wireless technology, computing, artificial intelligence, and system-level engineering. The company’s teams in India contribute to various aspects of design implementation, validation, AI optimization, and system integration, supporting global architecture and platforms that power billions of devices worldwide.

Amitesh Kumar Sinha, Additional Secretary at the Ministry of Electronics and IT and CEO of the India Semiconductor Mission, stated, “India’s Semiconductor Mission is progressing with strong momentum, supported by a strengthening design ecosystem and sustained industry participation.” He emphasized that investments in advanced engineering and research and development capabilities are crucial for building long-term semiconductor capacity in the country.

Sinha further noted that Qualcomm’s long-term commitment to India reflects the growing depth of the country’s semiconductor design ecosystem and contributes to India’s broader ambition of becoming a globally competitive hub for semiconductor innovation.

Srini Maddali, Senior Vice President of Engineering at Qualcomm India, described the 2nm tape-out as a validation of the engineering talent available in the country. “Working closely with global program and architecture teams on advanced semiconductor design requires the very best talent, and our India teams consistently deliver at a global standard,” he said.

Qualcomm’s research and development centers in India now contribute across multiple layers of system design, from architecture to software platforms and AI-driven use-case optimization. This is particularly critical in an era characterized by intelligent and connected systems.

The successful tape-out of the 2nm chip design comes at a time when India is intensifying its efforts to position itself as a global semiconductor hub. This initiative is supported by policy measures, ecosystem incentives, and industry partnerships. Qualcomm’s latest milestone adds momentum to this push, signaling that India is not just assembling chips for the world but is increasingly involved in designing the future of semiconductor technology.

Qualcomm, headquartered in San Diego, has maintained a presence in India for over 20 years, during which it has developed one of its largest engineering capabilities outside the United States. This long-standing investment underscores the company’s commitment to fostering innovation and development within India’s semiconductor landscape.

The developments at Qualcomm highlight the potential for India to become a key player in the global semiconductor industry, as the nation continues to build its capabilities and attract investment.

According to The American Bazaar, Qualcomm’s advancements are a significant step forward for India’s semiconductor ambitions.

Fox News AI Newsletter Claims Misinformation About Artificial Intelligence

The Fox News AI Newsletter highlights concerns over misinformation regarding artificial intelligence, job displacement, and the implications of AI on society and the economy.

The Fox News AI Newsletter provides readers with the latest advancements in artificial intelligence technology, exploring both the challenges and opportunities that AI presents in today’s world.

In a recent op-ed, Shyam Sankar, the chief technology officer of Palantir Technologies, asserted that “the American people are being lied to about AI.” He emphasized that one of the most significant misconceptions is the belief that artificial intelligence will lead to widespread job displacement for American workers.

Elon Musk, the billionaire entrepreneur, has stirred controversy by suggesting that individuals should not prioritize retirement savings due to the transformative potential of AI. Musk claims that advancements in artificial intelligence could render traditional savings strategies obsolete within the next decade or two. However, this perspective has raised eyebrows among financial experts.

Amid rising concerns about the economic impact of AI, Chevron CEO Mike Wirth outlined the company’s strategy to leverage U.S. natural resources to meet the increasing power demands of AI technologies. Wirth assured consumers that the company aims to absorb these costs rather than passing them on to customers, which is particularly important as electricity prices have surged in recent years.

Data centers and AI technologies have been linked to escalating electricity costs across the United States. According to reports, American consumers faced a staggering 42% increase in home power costs compared to a decade ago, raising questions about the sustainability of such growth.

As the implementation of AI technology accelerates, recent polling indicates that many voters believe the integration of AI into society is progressing too quickly. Additionally, there is widespread skepticism regarding the federal government’s ability to effectively regulate these emerging technologies.

Privacy concerns have also come to the forefront, particularly with the rise of popular mobile applications like Chat & Ask AI. This app, which boasts over 50 million users on platforms such as Google Play and the Apple App Store, has been criticized by independent security researchers for allegedly exposing hundreds of millions of private chatbot conversations online.

In a more optimistic tone, executives at Alphabet, Google’s parent company, expressed confidence during a recent post-earnings call. They indicated that the company’s substantial investments in artificial intelligence are beginning to yield tangible revenue growth across various sectors of the business.

Sankar further elaborated on the potential of AI in the workplace, describing it as a “massively meritocratic force.” He offered insights to corporate leaders on how to strategically position their companies and employees to thrive in an AI-driven environment.

In a cautionary tale, a woman named Abigail fell victim to a sophisticated scam, believing she was in a romantic relationship with a well-known actor. The messages, voice, and video appeared authentic, leading her to lose over $81,000 and her paid-off home, which she had intended to use for retirement.

As discussions surrounding artificial intelligence continue to evolve, it is crucial for individuals and organizations to remain informed about the implications of these technologies on society and the economy. For ongoing updates and insights into AI advancements, readers can turn to Fox News.

According to Fox News, the conversation around AI is just beginning, and understanding its impact will be essential for navigating the future.

Mars’ Red Color Linked to Potentially Habitable Past, Study Finds

Mars’ reddish hue may be linked to a mineral called ferrihydrite, suggesting the planet had a habitable environment capable of sustaining liquid water in its ancient past, according to a new study.

A recent study has revealed that the distinctive red color of Mars is primarily due to a mineral known as ferrihydrite, which forms in the presence of cool water. This finding challenges previous assumptions that hematite was the main contributor to the planet’s iconic hue.

Ferrihydrite is unique in that it forms at lower temperatures than other minerals found on Mars, indicating that the planet may have once had conditions suitable for liquid water before transitioning to its current dry state billions of years ago. NASA highlighted this potential in a news release this week, noting that the agency partially funded the study.

The research, published in the journal Nature Communications, involved an analysis of data collected from various Mars missions, including those conducted by several rovers. The team compared this data to laboratory experiments designed to simulate Martian conditions, where they tested how light interacts with ferrihydrite particles and other minerals.

Adam Valantinas, the study’s lead author and a postdoctoral fellow at Brown University, explained the historical context of the research. “The fundamental question of why Mars is red has been considered for hundreds, if not thousands, of years,” he stated. Valantinas, who began this research as a Ph.D. student at the University of Bern in Switzerland, emphasized the significance of their findings. “From our analysis, we believe ferrihydrite is present in the dust and likely in the rock formations as well,” he added.

While ferrihydrite’s role in Mars’ coloration has been suggested before, this study provides a more robust framework for testing the hypothesis using both observational data and innovative laboratory techniques that replicate Martian dust.

Jack Mustard, the senior author of the study and a professor at Brown University, described the research as a “door-opening opportunity.” He noted the importance of the ongoing sample collection by the Perseverance rover, stating, “When we get those back, we can actually check and see if this is right.” Mustard’s comments underline the potential for future discoveries regarding Mars’ geological history.

The study suggests that Mars may have once had a cool, wet climate that could have supported life. Although the planet’s current atmosphere is too cold to sustain life, evidence indicates that it once had an abundance of water, as reflected in the presence of ferrihydrite in its dust.

Geronimo Villanueva, Associate Director for Strategic Science at NASA’s Goddard Space Flight Center and a co-author of the study, remarked on the implications of the findings. “These new discoveries point to a potentially habitable past for Mars and highlight the value of coordinated research between NASA and its international partners when exploring fundamental questions about our solar system and the future of space exploration,” he said.

Valantinas further elaborated on the research objectives, stating, “What we want to understand is the ancient Martian climate and the chemical processes on Mars—not only ancient but also present.” He also addressed the habitability question, asking, “Was there ever life?” To answer this, researchers need to understand the conditions that existed during the formation of ferrihydrite.

According to Valantinas, the formation of ferrihydrite requires specific conditions where oxygen from the atmosphere or other sources interacts with iron in the presence of water. These conditions were markedly different from today’s dry and cold environment. As Martian winds spread the dust across the planet, they contributed to Mars’ iconic red appearance.

As research continues, the findings from this study may reshape our understanding of Mars’ geological history and its potential to have supported life in the past, paving the way for future exploration and discovery.

According to NASA, the implications of this research extend beyond just understanding Mars’ color; they may also provide insights into the planet’s capacity to host life in its ancient past.

European Union Alleges TikTok Violates Technology Laws with Addictive Features

The European Union has formally accused TikTok of violating technology laws by employing addictive design features that may harm users, particularly minors, as part of a broader regulatory crackdown on social media platforms.

The European Commission has issued preliminary findings alleging that TikTok’s platform design deliberately fosters addictive behavior among its European user base.

On Friday, the European Union escalated its regulatory scrutiny of the social media landscape by formally accusing TikTok of violating the bloc’s landmark technology laws. The European Commission, the EU’s executive arm, claims that the platform employs specific “addictive design” features that may compromise the mental and physical well-being of its users, particularly minors. This move signifies a significant escalation in the ongoing tension between Brussels and major technology firms regarding the long-term societal impacts of digital consumption.

Central to the Commission’s allegations are several hallmark features of the TikTok user experience, including the infinite scroll mechanism, default autoplay settings, and frequent push notifications. The investigation also focuses on the platform’s highly personalized recommender system, which regulators argue creates a “rabbit hole” effect that can be difficult for users to escape. The EU contends that these tools were designed to maximize engagement at the expense of user health, creating a feedback loop that constitutes a breach of the Digital Services Act.

Under the Digital Services Act, large online platforms are legally required to assess and mitigate systemic risks associated with their services. The European Commission asserts that TikTok failed to conduct a sufficiently rigorous assessment of how its design choices impact the psychological development of its younger demographic. Furthermore, the findings suggest that TikTok’s existing safety measures, such as parental controls and screen-time management tools, are insufficient to counteract the compulsiveness inherent in the platform’s primary interface.

Henna Virkkunen, the European Commission’s Executive Vice President for Tech Sovereignty, Security, and Democracy, emphasized the gravity of the situation in a public statement. She noted that social media addiction can have profound and detrimental effects on the developing minds of children and teenagers, leading to issues ranging from sleep deprivation to increased anxiety. Virkkunen asserted that the Digital Services Act was specifically designed to hold platforms accountable for these outcomes, reinforcing Europe’s commitment to protecting its citizens from digital harms.

In response to the allegations, TikTok has firmly denied the Commission’s findings, characterizing them as a fundamental misunderstanding of its platform. A spokesperson for the company stated that the EU’s depiction of TikTok is categorically false and meritless. TikTok has vowed to challenge the findings through all available legal channels, maintaining that it has consistently invested in safety features and transparency measures to support its community in Europe and beyond.

This legal friction follows a previous encounter between TikTok and EU regulators. In October, the company was found in violation of the Digital Services Act for failing to provide independent researchers with adequate access to public data. While TikTok managed to avoid a significant financial penalty in that instance by agreeing to a series of transparency commitments in December, this latest accusation regarding addictive design represents a more fundamental challenge to its core business model and user experience design.

The European Union’s move aligns with a growing global trend of litigation and regulation targeting the design architecture of social media apps. Recently, TikTok reached a settlement in a separate case where it was accused, alongside several other major tech firms, of intentionally designing its platform to foster addiction in children. Snap, the parent company of Snapchat, also reached a settlement shortly before its case was scheduled to go to trial, reflecting a shift in how these companies approach legal liability regarding user health.

The broader legal battle continues to unfold in courtrooms elsewhere. A high-profile trial involving Meta and YouTube proceeded last week after those companies chose not to settle. These cases are being closely monitored by regulators and industry analysts alike, as they could set a significant precedent for how the concept of “addictive design” is defined and regulated under modern consumer protection laws. The outcome of the EU’s investigation could lead to substantial fines, potentially reaching up to six percent of a company’s global annual turnover under the Digital Services Act.

The Digital Services Act is part of a duo of comprehensive tech laws, alongside the Digital Markets Act, intended to curb the power of “gatekeeper” platforms and ensure a safer digital environment. By targeting the algorithmic and structural elements of TikTok, the EU is signaling that it will no longer accept a hands-off approach to platform moderation. This focus on “recommender systems” is particularly notable, as these algorithms are the primary drivers of content discovery and user retention for modern social media companies.

Critics of the tech industry have long argued that the design choices mentioned by the Commission—such as the lack of a natural stopping point in an infinite scroll—are not accidental but are intentional psychological triggers. The EU’s investigation will now move into a more formal phase, where TikTok will have the opportunity to present evidence in its defense. However, the preliminary nature of these findings suggests that the Commission is confident in its initial assessment that the platform’s current safeguards are inadequate for the scale of the risk.

Beyond the legal implications, the investigation highlights a deepening divide between the regulatory philosophies of Europe and the United States. While the U.S. has seen various state-level efforts and individual lawsuits against tech giants, the EU’s centralized enforcement of the Digital Services Act provides a unified regulatory front that is unique in its reach and authority. This centralized approach allows the Commission to act as a singular watchdog for hundreds of millions of users, putting immense pressure on global companies to harmonize their safety standards with European law.

As the case progresses, the tech industry will be looking for clarity on what constitutes a “safe” design. If features like autoplay and personalized feeds are deemed inherently harmful by European regulators, it may force a total redesign of many popular applications. For TikTok, which relies heavily on its proprietary algorithm to maintain its competitive edge, the stakes could not be higher. The company must now prove that its engagement metrics do not come at the cost of the digital health of its most vulnerable users.

The timeline for a final decision remains uncertain, but the European Commission has signaled that it intends to move swiftly. Given the public nature of the accusations and the high-profile statements from EU leadership, it is clear that Brussels views this case as a landmark opportunity to define the boundaries of platform responsibility in the twenty-first century. For now, the tech world remains in a state of high alert as the definition of digital safety continues to be rewritten in the halls of European governance.

According to GlobalNetNews.

Tech Layoffs in 2026: A Comprehensive Overview

Tech layoffs continue to pose significant challenges in early 2026, following a tumultuous year for the industry in 2025.

The tech industry is grappling with ongoing layoffs as 2026 unfolds, echoing the difficulties faced in the previous year. In 2025, mass layoffs raised concerns about job security and the overall health of the job market, particularly amid increasing automation and the growing use of artificial intelligence. As the new year begins, major companies are continuing to announce job cuts, signaling that the trend is far from over.

Amazon has been at the forefront of these layoffs, cutting approximately 16,000 jobs in January, followed by an additional 2,200 in early February. These reductions are part of CEO Andy Jassy’s strategic initiative to streamline operations, reduce bureaucracy, and divest from underperforming business segments. Since October 2025, Amazon’s layoffs have totaled around 18,200 positions.

Ericsson, the telecommunications giant, has also announced plans to eliminate 1,600 jobs in Sweden. This decision is part of the company’s ongoing cost-saving measures aimed at navigating a prolonged downturn in telecom spending. Ericsson’s commitment to these measures underscores the challenges faced by the industry as it adapts to changing market conditions.

Chipmaking company ASML is set to cut around 1,700 jobs across the Netherlands and the United States. The layoffs are intended to bolster the company’s focus on engineering and innovation, with the majority of cuts affecting leadership roles within its technology and IT teams.

Meta, the parent company of Facebook, has laid off 1,500 employees as part of a restructuring of its Reality Labs division. This move comes as Meta shifts its investment focus from the Metaverse to wearable technology, following disappointing traction in the Metaverse space.

Autodesk, known for its design software, has announced it will reduce its global workforce by approximately 1,000 jobs, representing about 7% of its total employees. The company aims to redirect its spending towards its cloud platform and artificial intelligence initiatives, with the majority of job cuts affecting customer-facing sales teams.

Pinterest is also restructuring, planning to lay off nearly 15% of its workforce. This decision aligns with the company’s strategy to allocate more resources towards artificial intelligence, as it seeks to support transformation initiatives and prioritize AI-driven products.

Sapiens, a software provider, has revealed plans to cut hundreds of jobs, with the most significant impacts expected in India and the United States. Reports suggest that approximately 540 employees will be affected, although the distribution of layoffs will not be uniform across regions.

Additionally, Oracle is reportedly considering laying off around 30,000 employees and selling its health tech unit, Cerner, according to analysts at TD Cowen. While the full extent of the layoffs remains uncertain, the early announcements in 2026 indicate a challenging year ahead for tech employees.

As these companies navigate their respective challenges, the ongoing trend of layoffs raises questions about the future of employment in the tech sector. The impact of automation and artificial intelligence continues to reshape the landscape, leaving many employees uncertain about their job security.

According to The American Bazaar, the developments in the tech industry signal a need for adaptability and resilience among workers as they face an evolving job market.

OpenAI Experiences Senior Leadership Departures Amid ChatGPT Expansion

OpenAI is experiencing a significant turnover among its senior leadership as CEO Sam Altman reallocates resources to enhance ChatGPT, sidelining long-term research initiatives.

OpenAI has recently witnessed a wave of senior-level departures following CEO Sam Altman’s directive to prioritize resources for ChatGPT, according to a report by the Financial Times. This strategic shift has redirected computing power and personnel away from experimental projects, leading to high-profile exits within the organization.

Among those who have left is Jerry Tworek, the vice president of research, who departed in January after spending seven years at OpenAI. Tworek had been advocating for increased resources for his work on AI reasoning and continuous learning—the capability of models to assimilate new information without losing previously acquired knowledge. His efforts reportedly culminated in a standoff with chief scientist Jakub Pachocki, who favored focusing on OpenAI’s existing architecture around large language models, which he deemed more promising.

The departures follow Altman’s issuance of an internal “code red” in December 2025, during which he emphasized the urgent need for improvements in ChatGPT’s speed, personalization, and reliability. This memo effectively shelved initiatives related to advertising, AI shopping agents, and a personal assistant project known as Pulse. The code red was prompted by the emergence of Google’s Gemini 3, which surpassed OpenAI in key performance benchmarks, resulting in a surge in Alphabet’s stock value.

At OpenAI, researchers are required to apply for computing “credits” from top executives to initiate their projects. According to ten current and former employees who spoke with the Financial Times, those working on projects outside of large language models have increasingly found their requests either denied or granted insufficient resources to effectively pursue their research.

Teams responsible for projects like the video generator Sora and the image tool DALL-E have expressed feelings of neglect, as their work has been deemed less critical to the ChatGPT initiative. One senior employee remarked that they “always felt like a second-class citizen” compared to the primary focus areas. Over the past year, several projects unrelated to language models have been quietly phased out.

In January, Andrea Vallone, who led model policy research, joined competitor Anthropic after being assigned what she described as an “impossible” task—ensuring the mental well-being of users who were becoming emotionally attached to ChatGPT.

OpenAI’s pivot towards ChatGPT comes amid intensifying competition in the AI landscape. Google’s Gemini now boasts 650 million monthly users, a significant increase from 450 million in July 2025. Additionally, Anthropic has captured 40% of the enterprise market share, compared to OpenAI’s 27%, according to data from Menlo Ventures. Chief Research Officer Mark Chen has stated that foundational research “remains central” to OpenAI’s mission and still accounts for the majority of the company’s computing resources. However, many researchers feel that the current focus on optimizing a chatbot diverges from their original intentions for joining the organization.

The ongoing shifts at OpenAI highlight the challenges faced by the company as it navigates the competitive landscape of artificial intelligence, balancing immediate product demands with long-term research goals.

These developments underscore the complexities of innovation in a rapidly evolving field, where the pressure to deliver results can sometimes overshadow foundational research efforts.

According to the Financial Times, the implications of these changes could have lasting effects on OpenAI’s research capabilities and overall direction.

Microsoft’s Recent Actions Raise Unexpected Privacy Concerns

Microsoft’s provision of BitLocker encryption keys to law enforcement has raised significant concerns about digital privacy and the implications of encrypted data accessibility.

For years, encryption has been heralded as the gold standard for digital privacy, promising to safeguard data from hackers, corporations, and government entities alike. However, recent developments have cast doubt on this assumption. In a federal investigation related to alleged COVID-19 unemployment fraud in Guam, Microsoft confirmed it provided law enforcement with BitLocker recovery keys, enabling investigators to unlock encrypted data on several laptops.

This incident marks one of the clearest public examples of Microsoft complying with law enforcement requests for BitLocker recovery keys during a criminal investigation. While the warrant may have been lawful, the implications extend far beyond this single case. For many Americans, this situation serves as a stark reminder that “encrypted” does not always equate to “inaccessible.”

Federal investigators believed that three Windows laptops contained evidence linked to an alleged scheme involving pandemic unemployment funds. These devices were secured with BitLocker, Microsoft’s built-in disk encryption tool that is enabled by default on many modern Windows PCs. BitLocker encrypts all data on a hard drive, rendering it unreadable without a recovery key. Users can choose to store this key themselves, but Microsoft encourages backing it up to a Microsoft account for convenience. In this instance, that convenience proved significant. Upon receiving a valid search warrant, Microsoft provided the recovery keys to investigators, granting them full access to the data on the devices.

According to Microsoft, the company receives approximately 20 such requests annually and can only comply when users have opted to store their keys in the cloud. Attempts to reach Microsoft for further comment were unsuccessful before the article’s deadline.

John Ackerly, CEO and co-founder of Virtru and a former White House technology advisor, emphasizes that the issue lies not with encryption itself but with who controls the keys. He explains that the convenience of backing up BitLocker recovery keys to a Microsoft account means that Microsoft retains the technical ability to unlock a customer’s device. “When a third party holds both encrypted data and the keys required to decrypt it, control is no longer exclusive,” Ackerly states.

He warns that once a provider has the capability to unlock data, that power rarely remains theoretical. “When systems are built so that providers can be compelled to unlock customer data, lawful access becomes a standing feature. It is important to remember that encryption does not distinguish between authorized and unauthorized access,” he adds. “Any system designed to be unlocked on demand will eventually be unlocked by unintended parties.”

Ackerly points out that this outcome is not inevitable. Other technology companies have made different architectural choices. For instance, Apple has designed systems that limit its ability to access customer data, even when complying with government requests. Google offers client-side encryption models that allow users to retain exclusive control of their encryption keys. These companies comply with the law, but since they do not hold the keys, they cannot unlock the data. This distinction is crucial.

He believes Microsoft has the opportunity to change its approach. “Microsoft could address this by making customer-controlled keys the default and by designing recovery mechanisms that do not place decryption authority in Microsoft’s hands,” Ackerly suggests. “True personal data sovereignty requires systems that make compelled access technically impossible, not merely contractually discouraged.” In essence, Microsoft’s ability to comply with the warrant stemmed from a single design decision that transformed encrypted data into accessible data.

A Microsoft spokesperson stated, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s consumer cloud services. We recognize that some customers prefer Microsoft’s cloud storage, so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

This case has reignited a longstanding debate over lawful access versus systemic risk. Ackerly warns that centralized control has a troubling history. “We have seen the consequences of this design pattern for more than two decades,” he says. “From the Equifax breach, which exposed the financial identities of nearly half the U.S. population, to repeated leaks of sensitive communications and health data during the COVID era, the pattern is consistent: centralized systems that retain control over customer data become systemic points of failure. These incidents are not anomalies; they reflect a persistent architectural flaw.”

When companies hold the keys, they become targets for hackers, foreign governments, and legal demands from agencies like the FBI. Once a capability exists, it is rarely left unused. Apple has implemented systems, such as Advanced Data Protection, that prevent it from accessing certain encrypted user data, even when faced with government requests. Google also offers client-side encryption for some services, primarily in enterprise environments, where encryption keys remain under the customer’s control. This distinction is vital, as encryption experts often note: you cannot hand over what you do not have.

While personal privacy is not entirely lost, it now requires intentionality. Small choices can have significant implications. Ackerly emphasizes the importance of understanding control: “If you don’t control your encryption keys, you don’t fully control your data.” This control begins with knowing where your keys are stored. If they are kept in the cloud with your provider, your data may be accessible without your knowledge.

Once keys are outside your control, access becomes possible without your consent. Therefore, the manner in which data is encrypted is just as important as whether it is encrypted. Consumers should seek tools and services that encrypt data before it reaches the cloud, ensuring that providers cannot access it. Defaults often favor convenience, and many users do not change them. “Users should also look to avoid default settings designed for convenience,” Ackerly advises. “When convenience is the default, most individuals will unknowingly trade control for ease of use.”

When encryption is designed so that even the provider cannot access the data, the balance shifts back to the individual. “When data is encrypted in a way that even the provider can’t access, it stays private — even if a third party comes asking,” Ackerly states. “By holding your own encryption keys, you’re eliminating the possibility of the provider sharing your data.” He concludes with a straightforward lesson: “You cannot outsource responsibility for your sensitive data and assume that third parties will always act in your best interest. Encryption only fulfills its purpose when the data owner is the sole party capable of unlocking it.”

Microsoft’s decision to comply with the BitLocker warrant may have been legal, but it raises critical questions about modern encryption. Privacy relies less on mathematical algorithms and more on how systems are constructed. When companies hold the keys, the risk shifts to the users.

As individuals navigate this landscape, they must consider whether they trust tech companies to protect their encrypted data or if they believe that responsibility should rest solely with them. Understanding the implications of encryption and key management is essential for safeguarding personal privacy in an increasingly interconnected world.

According to CyberGuy, the choices users make regarding encryption and key management can significantly impact their digital privacy.

Private Lunar Lander Blue Ghost Successfully Lands on the Moon

A private lunar lander, Blue Ghost, successfully landed on the moon on Sunday, delivering equipment for NASA and marking a significant milestone for commercial space exploration.

A private lunar lander carrying essential equipment for NASA successfully touched down on the moon on Sunday. The landing was confirmed by the company’s Mission Control team, based in Texas.

Firefly Aerospace’s Blue Ghost lander made its descent from lunar orbit using autopilot technology, targeting the slopes of an ancient volcanic dome located in an impact basin on the moon’s northeastern edge. The successful landing was a significant achievement in the growing field of commercial lunar exploration.

Will Coogan, Firefly’s chief engineer for the lander, expressed excitement upon confirmation of the landing, stating, “You all stuck the landing. We’re on the moon.” This upright and stable landing positions Firefly as the first private company to successfully deliver a spacecraft to the moon without crashing or tipping over, a feat that has eluded some government space programs in the past. Historically, only five countries—Russia, the United States, China, India, and Japan—have achieved successful lunar landings.

The Blue Ghost lander, named after a rare species of firefly found in the United States, stands 6 feet 6 inches tall and spans 11 feet wide, providing enhanced stability during its lunar operations. Approximately half an hour after landing, the Blue Ghost began transmitting images from the lunar surface, with its first picture being a selfie, albeit partially obscured by the sun’s glare.

Looking ahead, two other companies are preparing to launch their lunar missions, with the next lander expected to join Blue Ghost on the moon later this week. This surge in private lunar exploration reflects a broader trend of increasing commercial interest in space, paving the way for future astronaut missions and scientific research on the moon.

According to The Associated Press, the successful landing of Blue Ghost marks a pivotal moment for Firefly Aerospace and the burgeoning commercial space industry.

Satyajayant Misra Appointed Co-Chair of Tokyo INFOCOM 2026 Committee

An Indian American professor has been appointed co-chair of the Technical Program Committee for the prestigious IEEE INFOCOM 2026 conference in Tokyo.

Satyajayant “Jay” Misra, an Indian American professor and associate dean of research at the New Mexico State University College of Engineering, has been appointed as the Technical Program Committee co-chair for the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Computer Communications 2026. This conference is recognized as one of the most prestigious events in the field of computer networking and communications.

Misra will co-chair the event alongside Professor Tian Lan from George Washington University. The IEEE INFOCOM conference serves as a premier international forum for presenting advances in computer communications, drawing leading researchers, industry experts, and academics from around the globe.

Scheduled to take place from May 18 to May 21, 2026, in Tokyo, Japan, the conference will feature a variety of activities, including keynote addresses, technical paper presentations, panels, workshops, tutorials, poster sessions, and programming aimed at students. This event continues a tradition that spans over four decades, dedicated to advancing the state of the art in networking research.

“INFOCOM continues to be one of the selective conferences for which networking and cybersecurity researchers work for a year or more to submit a high-quality paper,” Misra stated. “When I was a student, it was my dream to get a paper into INFOCOM any given year. It continues to be a high-impact venue. INFOCOM 2026 will bring researchers from all continents to spend four days in Tokyo, presenting and discussing cutting-edge research ideas.”

As co-chair of the Technical Program Committee, Misra will oversee the highly selective peer-review process, which involves more than 400 researchers from around the world. His responsibilities include building the technical program and ensuring the overall quality and impact of the research presented at the conference.

This role is considered one of the highest forms of professional service in the field, typically reserved for researchers who have made significant and sustained contributions. Misra joins a distinguished lineage of technical leaders associated with IEEE INFOCOM.

David Jáuregui, interim dean of the NMSU College of Engineering, remarked on Misra’s appointment, stating, “Dr. Misra’s appointment as Technical Program Committee co-chair of IEEE INFOCOM 2026 is a significant achievement. Serving in this role places NMSU alongside leading research institutions from around the world, underscoring the growing international visibility of our research efforts. It reflects not only Dr. Misra’s sustained scholarly leadership but also NMSU’s expanding contributions to advancing research in computer science, engineering, and emerging technologies on the global stage.”

For INFOCOM 2026, nearly 1,800 research papers were submitted from institutions worldwide, with approximately 330 papers accepted for presentation. Misra noted that this reflects the competitive nature and high standards for scholarly excellence associated with the conference.

“This year we had an increase of more than 20 percent in submitted papers, and this shows the growing interest in INFOCOM,” Misra explained. “The paper selection process is multi-level with significant oversight by seasoned researchers in the community, and it is rigorous and selective.”

The selection process lasts over five months and involves several rounds of anonymous interactions among reviewers for each paper. This culminates in a technical program committee meeting where borderline papers are adjudicated.

Misra’s role at INFOCOM 2026 highlights not only his personal achievements but also the increasing prominence of New Mexico State University in the global research community.

According to The American Bazaar, this appointment underscores the importance of collaboration and innovation in the rapidly evolving field of computer communications.

Waymo Faces Federal Investigation Following Child Struck by Vehicle

A Waymo autonomous vehicle struck a child near a Santa Monica school, leading to a federal investigation into the safety of self-driving cars in school zones.

Federal safety regulators are intensifying their scrutiny of self-driving cars following a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet. The investigation focuses on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours.

The crash occurred on January 23, raising immediate concerns about the behavior of autonomous vehicles in school zones and their ability to respond to unpredictable pedestrian movements. On January 29, the National Highway Traffic Safety Administration (NHTSA) confirmed it had opened a preliminary investigation into Waymo’s automated driving system.

According to documents released by the NHTSA, the incident took place within two blocks of the elementary school during peak drop-off times. The area was bustling with activity, including multiple children, a crossing guard, and several vehicles double-parked along the street.

Investigators reported that the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who sustained minor injuries. Notably, there was no safety operator inside the vehicle at the time of the incident.

The NHTSA’s Office of Defects Investigation is examining whether the autonomous system acted with appropriate caution given its proximity to a school zone and the presence of young pedestrians. The investigation will assess how Waymo’s automated driving system is designed to operate in and around school zones, particularly during busy pickup and drop-off times.

This includes evaluating whether the vehicle adhered to posted speed limits, how it responded to visual cues such as crossing guards and parked vehicles, and whether its post-crash response met federal safety standards. The agency is also reviewing Waymo’s actions following the incident.

Waymo stated that it voluntarily contacted regulators on the same day as the crash and expressed its commitment to cooperating fully with the investigation. In a statement, the company emphasized its dedication to improving road safety for both riders and other road users.

“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road,” the company said. “Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian.”

Waymo explained that the incident occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into the vehicle’s path. The Waymo technology detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.

“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph,” Waymo stated. “This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.”

Following the incident, the pedestrian stood up immediately, walked to the sidewalk, and 911 was called. The vehicle remained stopped, moved to the side of the road, and stayed there until law enforcement cleared it to leave the scene. Waymo emphasized that this event highlights the critical value of its safety systems.

Waymo vehicles are classified as Level 4 autonomy on the NHTSA’s six-level scale. At Level 4, the vehicle manages all driving tasks within specific service areas, and a human driver is not required to intervene. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.

The NHTSA has clarified that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them. This latest investigation follows a previous NHTSA evaluation that began in May 2024, which examined reports of Waymo vehicles colliding with stationary objects like gates, chains, and parked cars. That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses.

Safety advocates argue that the new incident underscores ongoing concerns regarding the operation of autonomous vehicles, particularly in sensitive environments like school zones. The investigation could influence how regulators establish expectations for autonomous driving systems near schools, playgrounds, and other areas with vulnerable pedestrians.

For parents, commuters, and riders, the outcome of this investigation may affect where and when autonomous vehicles are permitted to operate. The challenges posed by self-driving technology highlight the complexities of ensuring safety in scenarios involving human unpredictability, especially when children are involved.

Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? The answer to this question could play a significant role in shaping the future of autonomous vehicle regulation in the United States.

For further insights, please refer to Fox News.

Athena Lunar Lander Reaches Moon; Condition Still Uncertain

Athena lunar lander successfully reached the moon, but mission controllers remain uncertain about its condition and exact landing location.

Mission controllers have confirmed that the Athena lunar lander successfully touched down on the moon earlier today. However, they are still uncertain about the spacecraft’s condition following its landing, according to the Associated Press.

The precise location of Athena’s landing remains unclear. The lander, which is operated by Intuitive Machines, was equipped with an ice drill, a drone, and two rovers. Despite the uncertainty surrounding its status, officials reported that Athena was able to establish communication with its controllers.

Tim Crain, mission director and co-founder of Intuitive Machines, was heard instructing his team to “keep working on the problem,” even as the craft sent apparent “acknowledgments” back to the team in Texas.

The live stream of the landing was concluded by NASA and Intuitive Machines, who announced plans to hold a news conference later today to provide updates on Athena’s status.

This event follows a significant milestone in lunar exploration, as Athena becomes the second craft to land on the moon this week. On Sunday, Firefly Aerospace’s Blue Ghost successfully made its landing, marking a historic achievement as the first private company to deploy a spacecraft on the moon without it crashing or tipping over. Will Coogan, chief engineer for Blue Ghost, celebrated the accomplishment, stating, “You all stuck the landing. We’re on the moon.”

Last year, Intuitive Machines faced challenges with its Odysseus lander, which landed sideways, adding pressure to the success of today’s mission. The outcomes of both Athena and Blue Ghost represent significant advancements in private lunar exploration.

As the situation develops, further details about Athena’s condition and mission objectives are anticipated during the upcoming news conference, according to the Associated Press.

Uber Appoints Indian-American Balaji Krishnamurthy as CFO Amid Expansion

Uber has appointed Balaji Krishnamurthy as its new CFO, marking a significant shift toward a driverless future and an aggressive expansion of its robotaxi services.

Uber Technologies Inc. has announced the appointment of Balaji Krishnamurthy as its next chief financial officer, effective February 16. This move signals a major strategic shift for the company, as it intensifies its focus on autonomous vehicle partnerships and the development of a driverless future.

Krishnamurthy, who has been a long-time advocate for self-driving technology within Uber, currently serves as the vice president of strategic finance and investor relations. He will succeed Prashanth Mahendra-Rajah, who is stepping down after 27 months in the role to pursue new opportunities. This leadership change was revealed alongside Uber’s fourth-quarter earnings report, emphasizing the company’s pivot from developing its own autonomous hardware to becoming a leading global platform for robotaxi services.

At 41 years old, Krishnamurthy has played a pivotal role in Uber’s “asset-light” strategy, which focuses on partnerships rather than ownership of autonomous vehicles. He has also served on the board of Waabi, an autonomous trucking startup in which Uber recently increased its investment.

“Balaji knows Uber’s business inside and out and is a brilliant, decisive strategist,” said CEO Dara Khosrowshahi. “I am thrilled for him to step up as CFO as we kick off another big year.”

The upcoming year is poised to be significant for Uber, which plans to facilitate autonomous trips in up to 15 cities worldwide by the end of 2026. This ambitious expansion relies heavily on strategic partnerships, including a notable collaboration with Alphabet’s Waymo to introduce robotaxis in Austin and Atlanta, as well as a joint effort with Lucid and Nuro to deploy custom-built autonomous electric vehicles.

During a recent call with investors, Krishnamurthy highlighted Uber’s robust cash flow, which has seen a 20% year-over-year revenue increase, reaching $14.37 billion. He stated that this financial strength would allow the company to “invest with discipline” in the autonomous vehicle sector.

“We are entering 2026 with strong momentum,” Krishnamurthy noted. “We will invest across a multitude of opportunities, including positioning Uber to win in an AV future.”

However, the transition comes at a challenging time for Uber’s stock. Following the announcement of Krishnamurthy’s appointment, shares fell approximately 6%, as investors reacted to a first-quarter profit outlook that fell short of Wall Street expectations. This conservative guidance is partly due to the capital-intensive nature of scaling autonomous infrastructure and the costs associated with integrating new AI-driven software.

Outgoing CFO Mahendra-Rajah leaves behind a legacy of financial stabilization, having played a key role in helping Uber achieve investment-grade status and launching the company’s first-ever share buyback program. He will remain with the company as a senior advisor until July 1 to ensure a smooth transition.

As Uber shifts from being primarily a ride-hailing app to a high-tech logistics coordinator, Krishnamurthy’s appointment underscores the company’s commitment to not just preparing for a driverless future but actively investing in it.

According to The American Bazaar, this strategic shift reflects Uber’s determination to lead in the evolving landscape of autonomous transportation.

U.S. DOE Appoints Indian-Americans to Key Advisory Positions

The U.S. Department of Energy has appointed three Indian-American scientists to its newly established advisory committee, emphasizing their expertise in energy and technology.

The U.S. Department of Energy (DOE) has appointed three Indian-American scientists to its newly formed Office of Science Advisory Committee (SCAC), which is tasked with shaping the future of U.S. science and technology policy.

The SCAC will provide independent guidance on research priorities, emerging technologies, and cross-cutting scientific challenges that impact the nation’s energy agenda. This initiative comes at a critical time when the U.S. government is emphasizing innovation in fields such as fusion energy, quantum computing, and artificial intelligence.

Among the 21 members appointed to the advisory panel are Supratik Guha, Suresh Garimella, and A.N. Sreeram. Each brings a wealth of expertise in materials science, engineering, and advanced manufacturing.

Supratik Guha is a professor at the University of Chicago’s Pritzker School of Molecular Engineering and a researcher at Argonne National Laboratory. He has dedicated much of his career to the intersection of nanoscience and applied technology. Guha previously led Argonne’s Center for Nanoscale Materials and spent two decades at IBM Research, focusing on nanoscale materials and devices.

Suresh Garimella serves as the president of the University of Arizona and is a trained mechanical engineer with extensive academic and advisory experience. He has been a member of the National Science Board, a presidentially appointed body that oversees the National Science Foundation. Additionally, Garimella has held advisory roles with Sandia National Laboratories and the U.S. State Department, focusing on scientific collaboration.

A.N. Sreeram is the senior vice president and chief technology officer at Dow, where he holds more than 20 patents and has a long history in industrial research. His work emphasizes accelerating the transformation of scientific breakthroughs into commercial products. Sreeram has also served on the White House’s President’s Council of Advisors on Science and Technology.

Another notable member of Indian origin is Pushmeet Kohli, a British Indian computer scientist and vice president of science and strategic initiatives at Google DeepMind. His work primarily focuses on machine learning and AI-driven discovery.

Officials have indicated that SCAC’s broad mandate includes advising on federal research priorities, facilitating collaboration across national laboratories and universities, and helping the Department of Energy anticipate and adapt to new technological trends. The committee is expected to play a strategic role as the U.S. navigates competition in critical fields such as quantum science and climate-related technologies.

DOE Under Secretary for Science Darío Gil, who oversees the Office of Science, highlighted the importance of diverse expertise in achieving the department’s mission. “By bringing together leading minds from diverse institutions, we’re forging a collaborative framework that will accelerate the translation of fundamental research into tangible benefits for the American people,” Gil stated.

The appointments reflect the growing influence of Indian-Americans in U.S. science and the DOE’s commitment to harnessing global talent to advance national research priorities. The advisory committee is set to serve through January 2028, with its findings expected to inform DOE decisions.

SCAC will be chaired by Persis Drell, a professor of materials science and engineering and physics at Stanford University, who is also the provost emerita of Stanford and director emerita of SLAC National Accelerator Laboratory. The committee will adopt the core functions of the Office of Science’s six former discretionary advisory committees.

According to The American Bazaar, the establishment of SCAC marks a significant step in integrating diverse expertise into U.S. energy policy and research initiatives.

149 Million Passwords Exposed in Major Credential Leak

Over 149 million stolen credentials, including 48 million Gmail accounts, were exposed online, raising significant concerns about password security and the risks associated with credential reuse.

A massive database containing 149 million stolen logins and passwords has been discovered publicly exposed online, marking a troubling start to the year for password security. Among the compromised data are credentials linked to an estimated 48 million Gmail accounts, as well as millions from other popular services.

Cybersecurity researcher Jeremiah Fowler, who uncovered the database, confirmed that it was neither password-protected nor encrypted. This means that anyone who stumbled upon it could access the sensitive information without any barriers.

The database comprises 149,404,754 unique usernames and passwords, totaling approximately 96 gigabytes of raw credential data. Fowler noted that the exposed files contained email addresses, usernames, passwords, and direct login URLs for various platforms. Some records even indicated the presence of info-stealing malware, which can silently capture credentials from infected devices.

Importantly, this incident does not represent a new breach of Google, Meta, or other companies. Instead, the database appears to be a compilation of credentials stolen over time from previous breaches and malware infections. While this distinction is critical, the risk to users remains substantial.

Fowler estimates that email accounts dominate the dataset, which is particularly concerning because access to an email account often facilitates access to other accounts. A compromised email inbox can be exploited to reset passwords, access private documents, read years of messages, and impersonate the account holder. The prevalence of Gmail credentials in this database raises alarms that extend beyond any single service.

This exposed database was not a relic of the past; the number of records increased while Fowler was investigating it, suggesting that the malware responsible for the data collection was still active. Additionally, there was no ownership information associated with the database. After multiple attempts to alert the hosting provider, it took nearly a month for the database to be taken offline. During that time, anyone with internet access could have searched through the data, heightening the stakes for everyday users.

It is crucial to note that hackers did not breach Google or Meta systems directly. Instead, malware infected individual devices and harvested login details as users typed them or stored them in browsers. This type of malware is often disseminated through fake software updates, malicious email attachments, compromised browser extensions, or deceptive advertisements. Changing passwords alone will not mitigate the risk if the malware remains on the device.

To protect yourself, it is essential to take proactive steps, even if everything appears fine at the moment. Credential leaks like this often resurface weeks or months later. One of the most significant risks highlighted by this database is password reuse. If attackers gain access to one working login, they frequently test it across multiple sites automatically.

Start by changing reused passwords, prioritizing email, financial, and cloud accounts. Each account should have a unique password. Consider using a password manager to securely store and generate complex passwords, which can significantly reduce the risk of password reuse.

Next, check if your email has been exposed in past breaches. Many password managers include a built-in breach scanner that can verify whether your email address or passwords have appeared in known leaks. If you find a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Passkeys are another option to consider, as they replace traditional passwords with device-based authentication tied to biometrics or hardware. This means there is nothing for malware to steal. Major platforms, including Gmail, already support passkeys, and their adoption is on the rise. Enabling passkeys now can significantly reduce your attack surface.

Implementing two-factor authentication (2FA) adds an extra layer of security, even if a password is compromised. Whenever possible, use authenticator apps or hardware keys instead of SMS for 2FA, as this step alone can thwart most account takeover attempts linked to stolen credentials.

Changing passwords will not be effective if malware remains on your device. It is vital to install robust antivirus software and conduct a full system scan. Remove anything flagged as suspicious before updating passwords or security settings. Keeping your operating system and browsers fully updated is also crucial.

To safeguard against malicious links that could install malware and potentially access your private information, having strong antivirus software on all your devices is essential. This protection can also alert you to phishing emails and ransomware scams, helping to keep your personal information and digital assets secure.

Most major services provide recent login locations, devices, and sessions. Regularly check for unfamiliar activity, particularly logins from new countries or devices. If you notice anything suspicious, sign out of all sessions if the option is available and reset your credentials immediately.

Stolen credentials are often combined with data scraped from data broker sites, which can include personal information such as addresses, phone numbers, relatives, and work history. Utilizing a data removal service can help reduce the amount of personal information criminals can pair with leaked logins. Less exposed data makes phishing and impersonation attacks more challenging to execute.

While no service can guarantee complete removal of your data from the internet, a data removal service is a wise choice. Though these services can be costly, they actively monitor and systematically erase your personal information from numerous websites, providing peace of mind and effectively reducing your risk of being targeted.

Old accounts can be easy targets, as users often forget to secure them. Closing unused services and deleting accounts tied to outdated app subscriptions or trials can reduce the number of potential entry points for attackers.

This exposed database serves as a stark reminder that credential theft has become an industrial-scale operation. Criminals act quickly and often prioritize speed over security. However, simple steps can still be effective. Unique passwords, strong authentication, malware protection, and basic cyber hygiene can significantly enhance your security. Remain vigilant and proactive in safeguarding your digital presence.

For further information on protecting your online accounts, visit CyberGuy.com.

Artificial Intelligence Drives Development of New Energy Sources

Artificial Intelligence is playing a pivotal role in addressing rising electricity costs and enhancing energy sources, as U.S. consumers face unprecedented power bills amid increasing demand.

Artificial Intelligence (AI) and the proliferation of data centers are significant contributors to the rising electricity costs across the United States. As of December 2025, American consumers are paying 42% more for electricity compared to a decade ago. Exelon CEO Calvin Butler emphasized, “When you have increased demand and inadequate supply, costs are going to go up. And that’s what we’re experiencing right now.”

In 2024, U.S. data centers accounted for over 4% of the total electricity consumption in the country, according to the International Energy Agency. This consumption level is comparable to the annual electricity usage of the entire nation of Pakistan. Projections indicate that U.S. data center electricity consumption could grow by 133% by the end of the decade, reaching levels equivalent to the entire electricity consumption of France.

Butler noted that Exelon, headquartered in Chicago and owner of ComEd—one of the largest utilities in the nation—has seen a significant increase in data center load. “ComEd’s peak load is roughly 23 gigawatts. We have had data center load come onto the system, but by 2030, we’ll be at 19 gigawatts,” he explained. The utility has received a surge of connection requests from data centers, with potential projects totaling over 30 gigawatts expected to come online between now and 2045.

Butler remarked on the unprecedented growth in the sector, stating, “With the data center advent and the technology coming, we’ve been forced to serve that load, which is our responsibility. But what we also have to do is build new generation supply, which is not keeping up with the load that is coming on. And that’s the crunch that we’re in right now.”

In response to the growing demand, Commonwealth Edison is seeking regulatory approval for a $15.3 billion grid update over the next four years. While the U.S. has increased its grid capacity by more than 15% in the past decade, many utility companies and energy producers argue that this expansion is insufficient.

Bob Mumgaard, CEO of Commonwealth Fusion Systems, expressed concern about the current electricity constraints. “You want to make power plants that can make a lot of power in a small package that you can put anywhere, that you could run at any time, and fusion fits that bill,” he said. The company is working to introduce a new form of nuclear energy—fusion—which promises the reliability of traditional nuclear energy without producing long-lived radioactive waste.

“In fusion, there’s no chain reaction. The result is helium, which is safe and inert, and you don’t use it to make anything related to weapons,” Mumgaard added.

As the U.S. grapples with its power crunch, the role of AI in energy innovation is becoming increasingly vital. Commonwealth Fusion Systems is leveraging AI to accelerate the development of fusion energy. “Building and designing these complex machines and manipulating this complex data matter of plasma are all things that we’re still learning and figuring out how to do,” Mumgaard explained. “And that’s an area where we’ve been able to accelerate using AI.”

AI is also poised to enhance under-utilized energy sources, particularly geothermal energy. Despite its potential, geothermal energy has remained a small part of the electric grid due to high drilling costs and uncertainty about optimal infrastructure placement. Joel Edwards, co-founder of Zanskar, highlighted the potential of AI in improving geothermal exploration. “If you could drill the perfect geothermal well every single time, like you pick the right spot, you design the right well, you drill the 5,000, 8,000 feet, you hit 400°F temperatures, that’s incredibly productive,” he stated.

Zanskar is focused on refining the geothermal search process through AI-driven mapping techniques to identify untapped resources. “If we could just get more precise in where we go to find the things and then how we drill into the things, geothermal absolutely has the cost curve to come down,” Edwards noted. “And that’s sort of what we’re running towards, with AI giving us the boost, giving us an edge to do that.”

Both geothermal and nuclear fusion energy sources offer the advantage of producing power consistently, regardless of weather conditions. This capability could have alleviated some of the strain on the grid during recent winter storms. Butler cautioned about the urgency of addressing these energy challenges, likening the situation to driving a car with a persistent check engine light. “We have to pay attention to what’s going on, and this winter storm—Winter Storm Fern—is indicative of what’s coming,” he warned.

The integration of AI into energy production and management is not only a response to rising costs but also a crucial step toward a more sustainable and reliable energy future. As the demand for electricity continues to grow, the role of innovative technologies like AI will be essential in meeting the challenges ahead, according to Fox News.

IIT Alum Sanjiban Choudhury Receives NSF Early Career Development Award

Sanjiban Choudhury, an Indian American robotics researcher, has received the National Science Foundation Faculty Early Career Development Award for his innovative work in robotics.

Sanjiban Choudhury, an Indian American robotics researcher, has been awarded the National Science Foundation (NSF) Faculty Early Career Development Award for his groundbreaking efforts in developing robots that learn new skills similarly to humans. Choudhury, who serves as an assistant professor of computer science at Cornell University’s Ann S. Bowers College of Computing and Information Science, will utilize the $400,000 award to further his research initiatives.

The NSF award is designed to support early-career faculty members who demonstrate the potential to become academic role models in both research and education. The award also aims to foster advancements within their respective departments or organizations. Each funded project must incorporate an educational component, emphasizing the importance of teaching alongside research.

Choudhury’s research focuses on creating robots that can assist in various environments, including homes, hospitals, and farms. While many existing robots are limited to pre-programmed tasks, they often struggle to adapt to new situations or learn from human interactions. Choudhury’s innovative project seeks to overcome these limitations by developing robot helpers capable of learning new skills through observation, practice, and feedback.

The implications of Choudhury’s work could significantly enhance the functionality and adaptability of robots, enabling them to tackle more complex real-world challenges. His research not only aims to improve robotic assistance in everyday tasks but also seeks to deepen our understanding of how robots can learn and adapt to their environments.

In addition to his research, Choudhury’s project includes educational programs designed to engage K-12 students through interactive robotics activities. By providing accessible online resources, he aims to increase participation in STEM fields and promote interest in robotics research among young learners.

Choudhury’s academic background is impressive. He completed his postdoctoral research at the University of Washington and earned both his Master’s and PhD degrees from Carnegie Mellon University. His undergraduate and Master’s degrees in electrical engineering were obtained from the Indian Institute of Technology, Kharagpur.

Choudhury also leads the Portal group, which focuses on developing everyday robots that are user-friendly and practical for tasks ranging from cooking to cleaning. His commitment to making robotics accessible to a broader audience underscores his dedication to advancing the field.

As robotics continues to evolve, Choudhury’s contributions may pave the way for a future where robots can seamlessly integrate into daily life, providing valuable assistance across various sectors.

According to a press release from Cornell University, Choudhury’s work exemplifies the potential of robotics to enhance human capabilities and improve quality of life.

AI Wearable Technology Aids Stroke Survivors in Regaining Speech

Researchers at the University of Cambridge have developed Revoice, a wearable device that significantly improves communication for stroke survivors suffering from dysarthria.

Losing the ability to speak clearly after a stroke can be a devastating experience. For many survivors, the words remain in their minds, but their bodies struggle to cooperate. This results in speech that is slow, unclear, or fragmented. Known as dysarthria, this condition affects nearly half of all stroke survivors, making everyday communication exhausting and frustrating.

In response to this challenge, scientists at the University of Cambridge have developed a groundbreaking wearable device called Revoice. Designed specifically for individuals with post-stroke speech impairment, Revoice aims to help users communicate naturally without the need for surgery or brain implants.

Dysarthria is a physical speech disorder that can weaken the muscles in the face, mouth, and vocal cords following a stroke. As a result, speech may sound slurred, slow, or incomplete. Many stroke survivors can only articulate a few words at a time, despite knowing exactly what they wish to convey. Professor Luigi Occhipinti notes that this disconnect can lead to profound frustration for those affected. While stroke survivors often work with speech therapists using repetitive drills to improve their communication skills, these exercises can take months or longer to yield results. This prolonged recovery period can leave patients struggling during daily interactions with family, caregivers, and healthcare providers.

Revoice offers a novel approach to addressing these communication barriers. Instead of requiring users to type, track their eye movements, or rely on invasive implants, the device detects subtle physical signals from the throat and neck. Resembling a soft, flexible choker made from breathable, washable fabric, Revoice contains ultra-sensitive textile strain sensors and a small wireless circuit board. When a user silently mouths words, the sensors pick up tiny vibrations in the throat muscles. Simultaneously, the device measures pulse signals in the neck to gauge the user’s emotional state.

The device processes these signals using two artificial intelligence (AI) agents, enabling Revoice to convert a few mouthed words into fluent speech in real-time. Previous silent speech systems faced significant limitations, often tested only on healthy volunteers and requiring users to pause for several seconds between words, which disrupted the flow of conversation. Revoice overcomes these delays by employing an AI-driven throat sensor system paired with a lightweight language model. This efficient model consumes minimal power and delivers near-instantaneous responses, powered by a 1,800 mWh battery that researchers anticipate will last a full day on a single charge.

After refining the system with healthy participants, researchers conducted tests with five stroke patients suffering from dysarthria. The results were striking. In one instance, a patient mouthed the phrase “We go hospital,” and Revoice expanded it into a complete sentence that conveyed urgency and frustration, based on the emotional signals and context. Participants reported a 55% increase in communication satisfaction, stating that the device helped them communicate as fluently as they did prior to their stroke.

Researchers believe that Revoice could also benefit individuals with Parkinson’s disease and motor neuron disease. Its comfortable, washable design makes it suitable for daily wear, allowing it to integrate seamlessly into users’ routines rather than being confined to clinical settings. However, before widespread adoption can occur, larger clinical trials are necessary. The research team plans to initiate broader studies with native English-speaking patients and aims to expand the system to support multiple languages and a wider range of emotional expressions. The findings of this research were published in the journal Nature Communications.

For those who have experienced a stroke or have loved ones who have, this research indicates a significant shift in recovery tools. Revoice suggests that effective speech assistance does not need to be invasive. A wearable solution could support communication during the challenging months of rehabilitation, a time when confidence and independence often wane. Additionally, it may alleviate stress for caregivers who struggle to understand incomplete or unclear speech. Clear communication can enhance medical care, emotional well-being, and daily decision-making.

Communication is closely tied to dignity and independence. For stroke survivors, losing the ability to speak can be one of the most difficult aspects of recovery. Revoice exemplifies how artificial intelligence and wearable technology can collaborate to restore something fundamentally human. While it is still in the early stages, this device represents a meaningful step toward making recovery feel less isolating and more hopeful.

If a simple wearable could help restore natural speech, should it become a standard part of stroke rehabilitation? The potential impact of Revoice on the lives of stroke survivors and their families is profound, and further exploration of this technology may pave the way for a new era in speech recovery.

According to Fox News, the advancements made with Revoice could redefine the rehabilitation process for countless individuals affected by speech impairments.

Researchers Identify Source of Black Hole’s 3,000-Light-Year Jet Stream

A new study connects the M87 black hole to its powerful cosmic jet, revealing how it launches particles at nearly the speed of light.

A recent study has established a link between the renowned M87 black hole—the first black hole ever imaged—and its formidable cosmic jet. This research sheds light on how black holes can launch particles at speeds approaching that of light.

Using significantly enhanced coverage from the global Event Horizon Telescope, scientists have traced a cosmic jet that extends 3,000 light-years from the M87 black hole to its probable source. The findings, published in the journal Astronomy & Astrophysics this week, could provide crucial insights into the origins and mechanisms behind the vast cosmic jets emitted by black holes.

Located in the Messier 87 galaxy approximately 55 million light-years from Earth, M87 is a supermassive black hole that is 6.5 billion times the mass of the sun. The first image of this black hole was unveiled to the public in 2019, following data collection by the Event Horizon Telescope in 2017.

Dr. Padi Boyd of NASA highlighted the significance of M87, stating in a video about the discovery that not only is the black hole supermassive, but it is also active. “Just a few percent are active at any given time,” she explained. “Are they turning on and then turning off? That’s an idea… We know there are very high magnetic fields that launch a jet. This image provides observational evidence that what we’ve been seeing for a while is actually being launched by a jet connected to that supermassive black hole at the center of M87.”

M87 is known for both consuming surrounding gas and dust while simultaneously ejecting powerful jets of charged particles from its poles, which form the jet stream, as reported by Scientific American and Space.com.

Saurabh, the team leader at the Max Planck Institute for Radio Astronomy, remarked on the implications of the study, stating, “This study represents an early step toward connecting theoretical ideas about jet launching with direct observations.” He further noted, “Identifying where the jet may originate and how it connects to the black hole’s shadow adds a key piece to the puzzle and points toward a better understanding of how the central engine operates.”

The Event Horizon Telescope is a collaborative network of eight radio observatories that work together to detect radio waves emitted by astronomical objects, such as galaxies and black holes. This network effectively creates an Earth-sized telescope, allowing for unprecedented observations of these distant phenomena. The term “Event Horizon” refers to the boundary of a black hole beyond which light cannot escape, as defined by the National Science Foundation.

The findings were derived from data collected by the Event Horizon Telescope in 2021. However, the authors of the study cautioned that while the results are robust under the assumptions and tests performed, definitive confirmation and more precise constraints will necessitate future observations with the Event Horizon Telescope. These future observations would require higher sensitivity, improved intermediate-baseline coverage through additional stations, and an expanded frequency range.

As researchers continue to explore the mysteries of black holes, this study marks a significant advancement in understanding the dynamics of cosmic jets and their connection to supermassive black holes like M87, paving the way for future discoveries in the field of astrophysics.

According to Space.com, the implications of this research extend beyond mere observation, potentially reshaping our understanding of black hole behavior and the fundamental processes that govern these enigmatic cosmic entities.

Indian-American Raj Badhwar Appointed CIO at SPA

Indian American Raj Badhwar has been appointed Chief Information Officer at Systems Planning & Analysis, where he will enhance technology capabilities for national security missions.

Indian American IT leader Raj Badhwar has joined Systems Planning & Analysis (SPA), a prominent provider of data-driven analytical insights for national security programs, as Chief Information Officer (CIO). The company is based in Alexandria, Virginia.

Badhwar is now a member of SPA’s Executive Leadership Team and reports directly to Chief Executive Officer Rich Sawchak, according to a recent company announcement.

In his role as CIO, Badhwar will oversee SPA’s enterprise information technology (IT) organization. His responsibilities encompass digital strategy, architecture, engineering, operations, data management, and business intelligence. He aims to deliver secure, resilient, and scalable technology solutions while enhancing cybersecurity platforms in collaboration with SPA’s business and mission teams.

“Raj brings deep expertise in cybersecurity, cloud, and enterprise IT that will be critical as SPA continues to grow and support increasingly complex national security missions,” Sawchak stated. “His leadership will help ensure our technology remains secure, modern, and aligned with both our customers’ needs and our long-term strategy.”

Badhwar’s immediate priorities include bolstering technology capabilities that support SPA’s national security clients, improving efficiency and scalability within the IT organization, and ensuring that technology investments are in line with mission delivery, business growth, and acquisition activities.

“My work at SPA will center on ensuring technology directly supports mission outcomes for our national security customers,” Badhwar explained. “That means strengthening security and resilience, simplifying operations as we scale, and advancing our cloud, data, and cybersecurity capabilities in a disciplined and trusted way.”

With over 30 years of experience leading secure technology and cybersecurity organizations across various sectors, including engineering, defense, financial services, and cloud platforms, Badhwar is well-equipped to help SPA establish a secure, cloud-enabled, and data-driven technology foundation for future national security missions.

Badhwar holds a master’s degree in information systems technology from George Washington University and a bachelor’s degree in electrical and electronics engineering from Karnatak University in Dharwad, India.

The information regarding Badhwar’s appointment was reported by The American Bazaar.

Major U.S. Shipping Platform Exposed Customer Data to Hackers

Hackers are increasingly targeting global shipping technology, exposing vulnerabilities that could lead to significant cargo theft and supply chain disruptions.

In recent months, cybersecurity experts have raised alarms about the growing threat of hackers targeting the technology that underpins global shipping. This trend has shifted the focus of cargo theft from traditional methods, such as stolen trucks and forged paperwork, to sophisticated cyberattacks that manipulate logistics systems managing goods worth millions of dollars.

One notable incident involves Bluspark Global, a New York-based shipping technology provider. Its Bluvoyix platform is utilized by numerous companies to manage and track freight worldwide. Although Bluspark is not a household name, its software plays a crucial role in the operations of major retailers, grocery chains, and manufacturers.

For several months, Bluspark’s systems reportedly contained significant security vulnerabilities that left its platform exposed to potential attackers on the internet. The company acknowledged that five vulnerabilities were eventually addressed, including the use of plaintext passwords and the ability to remotely access and interact with the Bluvoyix platform. These flaws could have allowed hackers to access decades of shipment records and sensitive customer data.

While Bluspark claims that these issues have been resolved, the timeline leading up to the fixes raises serious concerns about the duration of the platform’s vulnerability and the challenges in notifying the company about the issues.

Security researcher Eaton Zveare discovered the vulnerabilities in October while examining a Bluspark customer’s website. What began as a routine review of a contact form quickly escalated into a deeper investigation. By analyzing the website’s source code, Zveare found that messages sent through the form were processed via Bluspark’s servers using an application programming interface (API).

As Zveare delved further, he uncovered that the API’s documentation was publicly accessible and included a feature that allowed anyone to test commands. Despite claims that authentication was necessary, the API returned sensitive data without requiring any login credentials. Zveare was able to extract extensive user account information, including employee and customer usernames and passwords stored in plaintext.

Even more alarming, the API permitted the creation of new administrator-level accounts without adequate security checks. This meant that an attacker could potentially gain full access to the Bluvoyix platform and view shipment data dating back to 2007. Security tokens intended to restrict access could also be bypassed entirely.

Perhaps the most troubling aspect of this situation is not just the vulnerabilities themselves, but the difficulty Zveare faced in getting them addressed. After discovering the flaws, he spent weeks attempting to contact Bluspark through emails, voicemails, and LinkedIn messages, all to no avail.

With no clear process for disclosing vulnerabilities, Zveare eventually sought assistance from Maritime Hacking Village, an organization that helps researchers notify companies in the shipping and maritime sectors. When that effort failed, he turned to the media as a last resort. It was only after engaging the press that Bluspark responded, albeit through its legal counsel.

Following the media coverage, Bluspark confirmed that it had patched the vulnerabilities and announced plans to establish a formal vulnerability disclosure program. However, the company has not disclosed whether it found evidence that attackers exploited these bugs to manipulate shipments, stating only that there was no indication of customer impact. Additionally, Bluspark declined to provide details about its security practices or any third-party audits.

The incident underscores the reality that hackers can infiltrate shipping and logistics platforms without users ever realizing their data has been compromised. As a precaution, experts recommend several steps to mitigate risks associated with such attacks.

After a supply chain breach, criminals often send phishing emails or texts impersonating shipping companies, retailers, or delivery services. If you receive a message urging you to click a link or “confirm” shipment details, take a moment to verify its authenticity by visiting the retailer’s website directly.

Moreover, if attackers gain access to customer databases, they may attempt to use the same login credentials across various platforms. Utilizing a password manager can help ensure that each account has a unique password, preventing a single breach from compromising multiple accounts.

It is also advisable to check whether your email has been exposed in previous breaches. Many password managers include built-in breach scanners that can alert you if your email address or passwords have appeared in known leaks. If you find a match, promptly change any reused passwords and secure those accounts with new, unique credentials.

Given that criminals often combine data from different breaches with information gathered from data broker sites, personal data removal services can help minimize the amount of publicly available information about you. While no service can guarantee complete removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind.

Additionally, strong antivirus software can block malicious links, fake shipping pages, and malware-laden attachments that often follow high-profile breaches. Keeping real-time protection enabled is crucial for safeguarding personal information and digital assets.

Implementing two-factor authentication (2FA) can significantly enhance account security, making it much harder for attackers to take over accounts even if they have obtained your password. It is essential to prioritize 2FA for email, shopping accounts, cloud storage, and any service that stores payment or delivery information.

In the aftermath of such incidents, it is also wise to monitor online shopping accounts for unfamiliar orders, address changes, or saved payment methods that you do not recognize. Early detection can prevent fraud from escalating.

Identity theft protection services can alert you to suspicious credit activity and assist in recovery if attackers access your personal details. These services monitor personal information, such as Social Security numbers and email addresses, and can notify you if they are being sold on the dark web or used to open new accounts.

In light of this incident, companies that rely on shipping and logistics platforms should take this as a reminder to review vendor access controls. Limiting administrative permissions, regularly rotating API keys, and ensuring vendors have a clear vulnerability disclosure process are critical steps in enhancing supply chain security.

As shipping platforms operate at the intersection of physical goods and digital systems, they remain attractive targets for cybercriminals. When basic protections like authentication and password encryption are absent, the consequences can extend beyond digital breaches, leading to stolen cargo and significant disruptions in the supply chain.

The incident involving Bluspark Global highlights the urgent need for companies to adopt robust security measures and establish transparent processes for reporting vulnerabilities. As the threat landscape continues to evolve, it is imperative for organizations to remain vigilant in protecting their systems and customer data.

For further insights on cybersecurity and data protection, please refer to CyberGuy.com.

Spectacular Blue Spiral Light in Night Sky Likely from SpaceX Rocket

A stunning blue spiral light, likely from a SpaceX Falcon 9 rocket, illuminated the night sky over Europe on Monday, captivating viewers and sparking social media excitement.

A mesmerizing blue light spiraled through the night sky over Europe on Monday, captivating onlookers and igniting discussions across social media platforms. Experts suggest that this striking phenomenon was caused by the SpaceX Falcon 9 rocket booster re-entering the Earth’s atmosphere.

Time-lapse footage captured from Croatia around 4 p.m. EST (9 p.m. local time) showcased the glowing spiral, which many observers likened to a cosmic whirlpool or a spiral galaxy. The full video, recorded at normal speed, lasts approximately six minutes, providing a stunning visual of the event.

The U.K.’s Met Office reported receiving numerous accounts of an “illuminated swirl in the sky,” confirming that it was likely related to the SpaceX rocket launch from Cape Canaveral, Florida. The Falcon 9 rocket lifted off at around 1:50 p.m. EST as part of the classified NROL-69 mission for the National Reconnaissance Office (NRO), the U.S. government’s intelligence and surveillance agency.

“This is likely to be caused by the SpaceX Falcon 9 rocket, launched earlier today,” the Met Office stated on X (formerly Twitter). “The rocket’s frozen exhaust plume appears to be spinning in the atmosphere and reflecting sunlight, which causes it to appear as a spiral in the sky.”

This glowing spectacle is a phenomenon often referred to as a “SpaceX spiral,” according to Space.com. Such spirals typically occur when the upper stage of a Falcon 9 rocket separates from its first-stage booster. As the upper stage continues its ascent into space, the lower stage descends back to Earth, releasing any remaining fuel. The fuel then freezes almost instantly at high altitudes, and sunlight reflects off the frozen particles, creating the striking visual effect.

Fox News Digital reached out to SpaceX for further comment but did not receive an immediate response. The timing of Monday’s celestial display was notable, as it followed closely on the heels of a successful SpaceX mission that saw a team working with NASA return two stranded astronauts from space.

The captivating blue spiral not only delighted viewers but also underscored the intricate and often dramatic nature of space exploration and rocket launches. As SpaceX continues to push the boundaries of aerospace technology, such visual phenomena are likely to become more common, further enchanting audiences around the globe.

According to Space.com, the occurrence of these spirals is a fascinating byproduct of modern rocket launches, blending science and spectacle in the night sky.

Philanthropists Chandrika and Ranjan Tandon Fund $11 Million AI School at IIM Ahmedabad

The Indian Institute of Management Ahmedabad has partnered with philanthropists Chandrika and Ranjan Tandon to establish a new school focused on artificial intelligence, supported by an $11 million endowment.

NEW DELHI – The Indian Institute of Management Ahmedabad (IIMA) has entered into a Memorandum of Understanding with philanthropist and alumna Chandrika Krishnamurthy Tandon and her husband, Ranjan Tandon, to create the Krishnamurthy Tandon School of Artificial Intelligence. This initiative is backed by a substantial endowment of ₹100 crore, equivalent to approximately $11 million.

The agreement was formalized in New Delhi, with Union Education Minister Dharmendra Pradhan in attendance. India’s Ambassador to the United States, Vinay Kwatra, participated in the event virtually.

The newly proposed school will function as a specialized center within IIMA, focusing on artificial intelligence at the intersection of technology, management, and public policy. According to a statement, the school will emphasize real-world applications and societal impact.

During the event, Minister Pradhan highlighted that this agreement is in line with preparations for the upcoming India–AI Impact Summit 2026. He noted that the initiative reflects ongoing efforts under Prime Minister Narendra Modi to enhance India’s global standing in the field of artificial intelligence. Pradhan emphasized that India’s advancements in AI will rely heavily on robust institutions and skilled human capital, in addition to technological capabilities.

The minister also praised the philanthropic efforts of the Tandon family, stating that alumni-led initiatives play a crucial role in strengthening academic institutions and expanding national capacity in emerging technologies.

The Krishnamurthy Tandon School of Artificial Intelligence aims to serve as a hub for collaboration among faculty, industry leaders, policymakers, and global partners. Its mission will include the development of application-led and case-based AI research, with a strong focus on translating research findings into practical solutions for business, governance, and social sectors.

Among those present at the signing ceremony were Higher Education Secretary Dr. Vineet Joshi, IIMA Director Prof. Bharat Bhasker, Joint Secretary (Higher Education) Purnendu Banerjee, and other senior representatives from the ministry.

This significant investment in education and technology underscores the growing importance of artificial intelligence in India and reflects a commitment to fostering innovation and leadership in this critical field, according to India West.

Under Armour Data Breach Affects Millions of Users Worldwide

Under Armour is investigating a significant data breach affecting approximately 72 million customers, following the online posting of sensitive records by hackers.

Sportswear and fitness brand Under Armour is currently probing claims of a substantial data breach after customer records were discovered on a hacker forum. The breach came to light when millions of users received alerts indicating that their personal information may have been compromised.

While Under Armour maintains that its investigation is ongoing, cybersecurity experts analyzing the leaked data suggest it contains personal details that could be linked to customer purchases. The breach notification service Have I Been Pwned reported that the dataset includes email addresses associated with around 72 million individuals, prompting the organization to directly notify affected users.

The scale of this exposure has raised significant concerns regarding the potential misuse of consumer data long after a breach has occurred. The stolen data is reportedly tied to a ransomware attack that took place in November 2025, for which the Everest ransomware group claimed responsibility. This group attempted to extort Under Armour by threatening to leak internal files.

In January 2026, customer data from this incident surfaced on a popular hacking forum. Shortly thereafter, Have I Been Pwned obtained a copy of the data and began alerting affected users via email. Reports indicate that the seller claimed the stolen files originated from the November breach and included millions of customer records.

The leaked dataset is believed to encompass a wide range of personal information. While there has been no confirmation regarding the exposure of payment card details, the data remains highly valuable to cybercriminals. Compromised information may include names, email addresses, birth dates, and purchase histories, which can be exploited to create convincing scams.

Researchers have also identified email addresses belonging to Under Armour employees within the leaked data, increasing the risk of targeted phishing and business email compromise scams. An Under Armour spokesperson stated, “We are aware of claims that an unauthorized third party obtained certain data. Our investigation of this issue, with the assistance of external cybersecurity experts, is ongoing. Importantly, at this time, there’s no evidence to suggest this issue affected UA.com or systems used to process payments or store customer passwords. Any implication that sensitive personal information of tens of millions of customers has been compromised is unfounded. The security of our systems and data is a top priority for UA, and we take this issue very seriously.”

Even in the absence of passwords or payment details, this breach poses serious risks. Names, email addresses, birth dates, and purchase histories can be used to craft highly convincing phishing attempts. Cybercriminals often reference actual purchases or account details to gain the trust of their targets. Consequently, phishing emails related to this breach may appear legitimate and urgent.

Over time, exposed data can be combined with information from other breaches to create detailed identity profiles that are increasingly difficult to protect against. To determine if your email has been affected, visit the Have I Been Pwned website, which serves as the official source for this newly added dataset. Enter your email address to check if your information appears in the leak.

If you received a breach alert or suspect your information may be included, taking immediate action can help mitigate future risks. If you have reused the same password across multiple sites, it is advisable to change those passwords promptly. Even if Under Armour asserts that passwords were not compromised, exposed email addresses can be used in follow-up attacks.

Utilizing a password manager can simplify this process by generating strong, unique passwords for each account and securely storing them. This way, a single breach cannot jeopardize multiple accounts. Additionally, check if your email has been exposed in previous breaches. Many password managers now include a built-in breach scanner that verifies whether your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords immediately and secure those accounts with new, unique credentials.

Cybercriminals often act swiftly following a breach. As a result, emails that seem to originate from Under Armour or other fitness brands may appear in your inbox. Exercise caution with messages claiming there is an issue with your account or a recent purchase. Avoid clicking links or opening attachments in unexpected emails; instead, visit the company’s official website directly if you need to verify your account.

Employing robust antivirus software can also help block malicious links and attachments before they can cause harm. To protect yourself from harmful links that may install malware and potentially access your private information, ensure you have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, safeguarding your personal information and digital assets.

Implementing two-factor authentication (2FA) adds an additional layer of security. Even if someone obtains your password, they would still require a second step to log in. Start by enabling 2FA for your email accounts, then extend it to shopping, fitness, and financial accounts. This simple measure can prevent many account takeover attempts linked to breached data.

After a breach, attackers frequently test stolen email addresses across various sites, which can trigger password reset emails that you did not request. Pay close attention to these alerts. If you receive one, secure the account immediately by changing the password and reviewing recent activity.

The Under Armour data breach serves as a reminder that even major global brands can become targets. While payment systems appear unaffected, the exposure of personal data still presents long-term risks for millions of customers. Data breaches often unfold over time, and what begins as leaked records can later fuel scams, identity theft, and targeted attacks. Remaining vigilant now can help reduce the likelihood of more significant issues in the future.

For further information, visit Cyberguy.com, where you can find expert-reviewed password managers, antivirus solutions, and data removal services to help protect your personal information.

According to CyberGuy, the Under Armour data breach highlights the ongoing risks associated with data security in the digital age.

Elon Musk Considers Company Merger Ahead of SpaceX IPO

Elon Musk is considering a merger of his companies, including SpaceX and xAI, as the rocket manufacturer prepares for a significant IPO this year.

Elon Musk, the CEO of Tesla, is reportedly exploring the possibility of merging his various companies, including SpaceX and xAI. This move comes in the wake of his decision to utilize Tesla funds to support xAI, raising questions among investors about the potential synergies between Musk’s ventures in space exploration, autonomous driving, and artificial intelligence.

According to a report by Bloomberg, SpaceX is in discussions regarding a merger with Tesla, Musk’s electric vehicle company. Gene Munster, a Tesla shareholder and managing partner at xAI investor Deepwater Asset Management, expressed optimism about the merger’s likelihood, stating, “I think it’s highly likely that (xAI) ends up with one of the two parties.”

As SpaceX prepares for a major public offering scheduled for this year, the potential merger with xAI could consolidate Musk’s diverse portfolio, which includes rockets, Starlink satellites, the X social media platform, and the Grok chatbot. This consolidation could streamline operations and enhance strategic coherence across Musk’s enterprises, according to sources familiar with the discussions and regulatory filings.

Dennis Dick, chief market strategist at Stock Trader Network, commented on Musk’s expansive business interests, noting, “Musk has too many separate companies. A major risk thesis for Tesla is that Musk is spreading himself out too much. As a Tesla shareholder, I applaud further consolidation.”

If the merger between SpaceX and xAI proceeds, it is expected that xAI shares would be exchanged for SpaceX shares. This consolidation could represent a significant shift in how Musk manages his extensive business empire, potentially allowing for greater integration of technologies developed across his various companies.

By centralizing operations, Musk could accelerate innovation and streamline decision-making processes, reducing redundancies in research, development, and operations. For investors, a unified structure may clarify growth prospects and simplify valuations, addressing concerns about Musk’s divided attention among multiple high-profile ventures.

From a competitive standpoint, merging these assets could strengthen SpaceX’s position in emerging technology markets, particularly in artificial intelligence and autonomous systems. By aligning expertise, talent, and technological capabilities under one organizational umbrella, Musk may be better equipped to tackle ambitious projects that span multiple industries, including aerospace, defense, and AI-driven commercial applications.

Incorporating xAI into SpaceX’s operations could also enhance the company’s prospects for securing contracts with the Pentagon, which has been actively seeking to increase AI adoption within military networks. Caleb Henry, an analyst at Quilty Analytics, highlighted this potential advantage, noting that the merger could position SpaceX favorably in the defense sector.

However, merging different corporate cultures, compliance requirements, and financial structures could pose challenges. If not managed carefully, these complexities could create friction or slow down execution, impacting both short-term performance and long-term strategic outcomes. How Musk navigates these challenges will likely play a crucial role in the success of the merger.

Ultimately, the potential consolidation of Musk’s companies reflects his ambition to create a cohesive ecosystem of interrelated technologies. This strategy could position SpaceX and his other ventures for a new era of innovation and market influence, although the outcome remains uncertain and contingent upon regulatory approvals, investor support, and effective execution.

The broader implications of such a merger could reshape investor perceptions of Musk’s ventures, potentially attracting capital from those interested in a unified tech ecosystem. Market reactions may vary based on the effectiveness of the integration process, and analysts will likely debate whether the potential synergies outweigh the risks associated with overconcentration. Additionally, this move could prompt competitors to reevaluate their strategies, considering partnerships or mergers to remain competitive in overlapping sectors.

As the situation develops, stakeholders will be closely monitoring Musk’s next steps and the potential impact on the tech landscape.

According to Bloomberg, the discussions surrounding the merger are ongoing, and the final outcome will depend on various factors, including regulatory approvals and investor sentiment.

Humanoid Robot Designs Building, Making Architectural History

Ai-Da Robot has made history as the first humanoid robot to design a building, presenting a modular housing concept for future lunar and Martian bases at the Utzon Center in Denmark.

At the Utzon Center in Denmark, Ai-Da Robot, recognized as the world’s first ultra-realistic robot artist, has achieved a groundbreaking milestone by becoming the first humanoid robot to design a building. The project, titled Ai-Da: Space Pod, introduces a modular housing concept intended for future bases on the Moon and Mars.

This innovative endeavor marks a significant shift in Ai-Da’s capabilities, moving from creating art to conceptualizing physical spaces for both humans and robots. Previously, Ai-Da garnered attention for her work in drawing, painting, and performance art, which sparked global discussions about the role of robots in creative fields.

The exhibition “I’m not a robot,” currently on display at the Utzon Center, runs through October and delves into the creative potential of machines. As robots increasingly demonstrate the ability to think and create independently, visitors to the exhibition can engage with Ai-Da’s drawings, paintings, and architectural designs. The exhibition also features a glimpse into Ai-Da’s creative process through sketches, paintings, and a video interview.

Ai-Da is not merely a digital avatar or animation; she possesses camera eyes, advanced AI algorithms, and a robotic arm that enables her to draw and paint in real time. Developed in Oxford and constructed in Cornwall in 2019, Ai-Da’s versatility spans multiple disciplines, including painting, sculpture, poetry, performance, and now architectural design.

Aidan Meller, the creator of Ai-Da and Director of Ai-Da Robot, explains the significance of the Space Pod concept. “Ai-Da presents a concept for a shared residential area called Ai-Da: Space Pod, foreshadowing a future where AI becomes an integral part of architecture,” he states. “With intelligent systems, a building will be able to sense and respond to its occupants, adjusting light, temperature, and digital interfaces according to needs and moods.”

The Space Pod design is intentionally modular, allowing each unit to connect with others through corridors, fostering a shared residential environment. Ai-Da’s artistic vision includes a home and studio suitable for both humans and robots. According to her team, these designs could evolve into fully realized architectural models through 3D renderings and construction, potentially adapting to planned Moon or Mars base camps.

While the concept primarily targets future extraterrestrial bases, it is also feasible to create a prototype on Earth. This aspect is particularly relevant as space agencies prepare for extended missions beyond our planet. Meller emphasizes the timeliness of the project, noting, “With our first crewed Moon landing in 50 years scheduled for 2027, Ai-Da: Space Pod is a simple unit connected to other Pods via corridors.” He adds, “Ai-Da is a humanoid designing homes, which raises questions about the future of architecture as powerful AI systems gain greater agency.”

The exhibition aims to provoke thought and discomfort regarding the rapid pace of technological advancement. Meller points to developments in emotional recognition through biometric data, CRISPR gene editing, and brain-computer interfaces, each carrying both promise and ethical risks. He references dystopian themes from literature, such as Aldous Huxley’s “Brave New World,” and cautions about the potential misuse of powerful technologies.

Line Nørskov Davenport, Director of Exhibitions at the Utzon Center, describes Ai-Da as a “confrontational” figure, stating, “The very fact that she exists is confrontational. Ai-Da is an AI shaker, a conversation starter.” This exhibition transcends the realms of robotics and space exploration, highlighting the swift transition of AI from a creative tool to a decision-maker in architecture and housing.

As AI begins to influence the design of living spaces, critical questions about control, ethics, and accountability arise. If a robot can conceptualize homes for the Moon, it raises concerns about how such technology might shape building functionality on Earth.

Ai-Da’s work challenges the notion of what is possible for humanoid robots and their role in society. Her presence in a major cultural institution ignites discussions about creativity, technology, and responsibility. As the boundaries between human and machine continue to blur, the implications of AI’s involvement in architecture and design become increasingly significant.

The question remains: if AI can design the homes of our future, how much creative control should humans be willing to relinquish? This inquiry invites ongoing dialogue about the intersection of technology and human creativity.

According to CyberGuy, Ai-Da’s Space Pod serves as a catalyst for critical reflection on the evolving relationship between humans and artificial intelligence.

Wolf Species Extinct for 12,500 Years Resurrected, Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species that last roamed the Earth over 12,500 years ago, using advanced genetic technologies.

A U.S. company, Colossal Biosciences, has announced a groundbreaking achievement: the revival of the dire wolf, a species that has been extinct for more than 12,500 years. The dire wolf, made famous by the HBO series “Game of Thrones,” is said to have been brought back to life through innovative genome-editing and cloning techniques.

According to Colossal Biosciences, this marks the world’s first successful instance of what they term a “de-extincted animal.” However, some experts have raised concerns, suggesting that the company may have merely genetically modified existing wolves rather than truly resurrecting the extinct apex predator.

Historically, dire wolves roamed the American midcontinent during the Ice Age. The oldest confirmed fossil of a dire wolf, dating back approximately 250,000 years, was discovered in the Black Hills of South Dakota. In “Game of Thrones,” these wolves are portrayed as larger and more intelligent than their modern counterparts, exhibiting fierce loyalty to the Stark family, a central noble house in the series.

Colossal’s project has produced three litters of dire wolves, including two adolescent males named Romulus and Remus, and a female puppy called Khaleesi. The scientists utilized blood cells from a living gray wolf and employed CRISPR technology—short for “clustered regularly interspaced short palindromic repeats”—to make genetic modifications at 20 different sites. According to Beth Shapiro, Colossal’s chief scientist, these modifications were designed to replicate traits believed to have helped dire wolves survive in cold climates during the Ice Age, such as larger body sizes and longer, fuller, light-colored fur.

Of the 20 genome edits made, 15 correspond to genes identified in actual dire wolves. The ancient DNA used in the project was extracted from two fossils: a tooth from Sheridan Pit, Ohio, approximately 13,000 years old, and an inner ear bone from American Falls, Idaho, dating back around 72,000 years.

The genetic material was transferred into an egg cell from a domestic dog, and the embryos were subsequently implanted into surrogate domestic dogs. After a gestation period of 62 days, the genetically engineered pups were born.

Ben Lamm, CEO of Colossal Biosciences, described this achievement as a significant milestone, emphasizing that it represents the first of many examples showcasing the effectiveness of the company’s comprehensive de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal Biosciences has previously announced similar initiatives aimed at genetically altering cells from living species to create animals resembling other extinct species, such as woolly mammoths and dodos. In addition to the dire wolves, the company recently reported the birth of two litters of cloned red wolves, which are critically endangered. This development is seen as evidence of the potential for conservation through de-extinction technology.

During a recent announcement, Lamm mentioned that the team had met with officials from the Interior Department in late March regarding their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists have expressed skepticism about the feasibility of restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, voiced concerns about the claims made by Colossal Biosciences. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw remarked. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences asserts that the wolves are currently thriving in a secure 2,000-acre ecological preserve in Texas, which is certified by the American Humane Society and registered with the USDA. Looking ahead, the company plans to restore the species in secure and expansive ecological preserves, potentially on indigenous land.

This ambitious project raises important questions about the future of conservation and the ethical implications of de-extinction efforts. As the debate continues, the work of Colossal Biosciences may pave the way for new approaches to preserving biodiversity.

According to Fox News, the implications of this project extend beyond mere scientific curiosity, potentially influencing conservation strategies for endangered species in the years to come.

Samsung Galaxy S26 Ultra Leaks Reveal February 2026 Launch Details

Leaks suggest that Samsung will unveil its Galaxy S26 series, including the Galaxy S26 Ultra, during a Galaxy Unpacked event on February 25, 2026, with a likely on-sale date in March.

Samsung enthusiasts are gearing up for one of the most significant smartphone launches of 2026, as recent leaks and industry hints indicate a Galaxy Unpacked event scheduled for February 25, 2026. During this event, Samsung is expected to unveil its next-generation Galaxy S26 lineup, which includes the Galaxy S26, Galaxy S26+, and Galaxy S26 Ultra.

Traditionally, Samsung kicks off its flagship smartphone cycle with the Galaxy S series, typically announcing new models in January or February. However, this year’s unveiling appears to be more than a month later than usual, a shift that has generated considerable excitement among fans eager to see what innovations the South Korean tech giant will introduce.

Insider tipster Evan Blass recently shared a leaked invitation on X, confirming the February 25 launch date for the Galaxy Unpacked event. The teaser image also hints at the simultaneous launch of Samsung’s next-generation Galaxy Buds 4 and Buds 4 Pro, making this event a significant occasion for multiple new product introductions. This confirmed date aligns with various recent leaks and supports ongoing rumors regarding the phone’s launch timeline.

The Galaxy S26 series is anticipated to follow a familiar three-model structure: standard, Plus, and Ultra. This return to a traditional format comes after the Galaxy S25 Edge was reportedly dropped due to lackluster sales.

In terms of display and design, all models are expected to feature high-quality AMOLED displays with 120Hz refresh rates, improved brightness, and enhanced viewing angles. Some variants may also incorporate new privacy display technology to protect on-screen content from prying eyes.

Performance-wise, the base Galaxy S26 and S26+ may utilize Samsung’s in-house Exynos 2600 chipset, while the S26 Ultra is likely to be powered by Qualcomm’s Snapdragon 8 Elite Gen 5, a robust flagship processor.

Camera capabilities are also set to receive a significant upgrade, with early reports indicating that the Ultra model will feature a 200-megapixel main sensor. This will be complemented by advanced cropping or zoom solutions and wider aperture lenses designed to enhance low-light photography.

Additionally, leaked information suggests that the entire Galaxy S26 range may support upgraded wireless charging and MagSafe-style accessories through Qi2 compatibility.

While Samsung has yet to officially confirm the launch dates, leaks from various sources, including tipsters like Ice Universe, suggest the following timeline:

Galaxy Unpacked Event: February 25, 2026

Pre-Orders Start: Around February 26

Pre-Sale Period: Early March

Official On-Sale Date: Around March 11, 2026

These dates may vary slightly by region, but the overall trend indicates a late February introduction followed by a March market debut.

As for pricing, the expected costs for the Galaxy S26 series in India are as follows:

The Galaxy S26 is likely to start at around ₹84,999, with a base storage option of 256GB, as the 128GB variant may be discontinued. Higher storage options, such as 512GB, are expected to be priced above the entry-level model.

The Galaxy S26 Plus is anticipated to have a starting price of approximately ₹1,04,999, with the base 256GB variant remaining similar to last year’s model. The 512GB variant is likely to be priced higher than previous Plus models.

For the Galaxy S26 Ultra, the expected starting price is around ₹1,34,999. The 256GB and 512GB versions may be slightly cheaper than their S25 Ultra counterparts, while the 1TB variant is expected to maintain a price similar to last year’s Ultra model.

The delay in the launch of the Galaxy S26 series is noteworthy for fans and potential buyers. Historically, Samsung has unveiled its Galaxy S-series smartphones in late January or early February, as seen with the Galaxy S25 launch in January 2025. This year’s later debut may be attributed to strategic changes in the lineup and product planning.

This delay has heightened anticipation, with fans speculating that Samsung might be fine-tuning hardware upgrades, storage options, and design features. As the February 25 event approaches, more detailed leaks regarding specifications and pricing are expected to surface.

For tech enthusiasts and smartphone buyers, the late February launch offers a compelling reason to postpone upgrades until Samsung’s next flagship arrives. With anticipated improvements across display, chipset, camera, battery, and AI features, the Galaxy S26 series is poised to compete vigorously in the premium smartphone segment.

The introduction of new Galaxy Buds at the same event further enhances the value of the February 25 Unpacked, making it one of the most eagerly awaited tech events of early 2026.

These insights into the upcoming Galaxy S26 series are based on leaks and industry speculation, according to The Sunday Guardian.

Startup Bazaar to Host Events in UAE on January 31 and February 2

The American Bazaar’s Startup Bazaar series will debut in the UAE with events in Abu Dhabi and Dubai, focusing on AI and emerging technologies.

The American Bazaar is set to launch its flagship Startup Bazaar series in the United Arab Emirates, featuring back-to-back events on January 31, 2026, in Abu Dhabi and February 2, 2026, in Dubai. These events aim to unite startup founders, investors, and leaders in the tech ecosystem to explore and showcase innovations in artificial intelligence and other emerging technologies.

Positioned at the intersection of technology, investment, and policy, the Startup Bazaar events promise a vibrant mix of ideas, discussions, and networking opportunities that will help shape the future of AI-driven entrepreneurship.

The Abu Dhabi event will take place on January 31, while the Dubai event is scheduled for February 2. Both events are organized in partnership with Talrop, an India-based technology and innovation company dedicated to fostering startups, developing digital products, and nurturing tech talent across the Gulf Cooperation Council (GCC) region.

These gatherings are expected to attract U.S.-based investors alongside their counterparts from the GCC and India, as well as senior executives and high-growth founders. This diverse mix will facilitate a unique cross-border exchange of insights and perspectives.

As the UAE continues to establish itself as a global hub for advanced technologies, the Startup Bazaar will highlight innovations in AI, deep tech, and other frontier technologies, particularly in the energy, healthtech, and pharmaceutical sectors. These discussions are anticipated to contribute to economic transformation and create tangible impacts in the region.

“The UAE is emerging as one of the most exciting and execution-focused AI startup ecosystems globally,” said Sanjay Puri, a member of the U.S. investor delegation attending the events. “This delegation presents a valuable opportunity to engage with founders, universities, family offices, and industry leaders like G42, exploring how talent, capital, and policy are converging at scale. I am particularly interested in how the region is translating research and ambition into globally competitive AI companies, and I see significant potential for long-term cross-border partnerships and investment.”

Designed to be more than a traditional conference, Startup Bazaar offers an immersive experience for startup founders, technologists, investors, policymakers, corporate innovation leaders, researchers, and professionals. Attendees will have the chance to engage directly with the U.S. delegation, which includes angel investors and AI experts.

A highlight of both events will be the Startup Showcase, where selected startups will pitch their ideas to potential investors. For founders seeking visibility, feedback, and funding opportunities, this showcase serves as a direct gateway to international markets.

As Startup Bazaar makes its debut in Abu Dhabi and Dubai, it not only fosters conversations about innovation but also brings together the people, capital, and ambition necessary to drive future advancements.

For those interested in attending, registration is now open for both the Abu Dhabi and Dubai editions of Startup Bazaar.

According to The American Bazaar, the series promises to be a significant event in the region’s tech landscape.

Dr. Satheesh Kathula Appointed Chair of Board of Directors, Indo-American Press Club

The Indo-American Press Club (IAPC), the largest and most influential organization representing journalists and media professionals of Indian origin across North America, has announced the appointment of Dr. Satheesh Kathula as Chair of its Board of Directors for 2026. A distinguished oncologist, community leader, and immediate past president of the American Association of Physicians of Indian Origin (AAPI), Dr. Kathula brings more than two decades of leadership and public service to this prominent role.

Dr. Kathula has served as a practicing oncologist for nearly 25 years, earning widespread respect for his compassionate care and contributions to the advancement of cancer treatment.

His association with IAPC spans many years. In 2005, he received the organization’s prestigious Leadership Award in recognition of his service and advocacy.

Accepting the new role, Dr. Kathula outlined a bold and forward-looking vision for the organization. “As the Chair of the Indo-American Press Club, I will champion ethical, evidence-based journalism, strengthen Indo–U.S. narratives, and elevate health and science reporting,” he said. Emphasizing modernization and broader engagement, he added, “My focus is on building bridges across cultures, modernizing our digital presence, and expanding our influence beyond ethnic media. With unity, integrity, and responsible innovation at the core, I aim to create a lasting legacy that empowers journalists, informs communities, and positions the Club as a trusted voice of impact.”

Reflecting on the challenges facing media professionals today, Dr. Kathula noted, “These are unprecedented times, especially for journalists and the media, when the very freedom of expression is at risk. At IAPC, we envisage our vision through collective efforts and advocacy activities through our nearly one thousand members across the U.S. and Canada, by being a link between the media fraternity and the world at large.”

Ginsmon Zachariah, Founding Chair of the IAPC Board of Directors, highlighted the broader mission of the organization. “Our homeland India is known to have a vibrant, active, and free media, which plays a vital role in the functioning of the world’s largest democracy,” he said. “As members of the media in our adopted land, we recognize our responsibility to be a source of effective communication. We have a role to play in shaping a just and equitable world where everyone enjoys freedom and liberty.”

Providing historical context, Ajay Ghosh, Founding President of IAPC, reflected on the organization’s origins. “We as individuals and corporations representing print, visual, electronic, and online media realized that we had a greater role to play,” he said. “For decades, many of us stood alone in a vast media landscape, our voices often drowned out. IAPC was formed to fill this vacuum—a common platform to raise our collective voice, pool our talents, and respond cohesively to the challenges of the modern world.”

A graduate of Siddhartha Medical College in Vijayawada, Andhra Pradesh, Dr. Kathula currently serves as a clinical professor of medicine at Wright State University’s Boonshoft School of Medicine in Dayton, Ohio. Dr. Kathula completed a Global Healthcare Leaders Program at Harvard University.He also holds a certificate in Artificial Intelligence in Healthcare from Stanford University and is a Diplomate of the American Board of Lifestyle Medicine.

He has authored several medical papers and published a book “Immigrant Doctors: Chasing the Big American Dream” highlighting the contribution of immigrant doctors, their struggles and triumphs. It is Amazon’s best selling. He embarked on his second book on cancer awareness for general public.

Dr. Kathula’s professional achievements extend far beyond medicine. Dr. Kathula’s commitment to community service is equally noteworthy. He has led bone marrow donor drives to address the severe shortage of South Asian donors and was named “Man of the Year – 2018” by the Leukemia and Lymphoma Society for raising funds to help fund the research to find newer treatments and cures for blood cancers.

His commitment to community service is equally noteworthy. His philanthropic work in India includes establishing the Pathfinder Institute of Pharmacy and Educational Research (PIPER) in Warangal, Telangana, which has already graduated more than 1,000 students. He has also supported medical camps and donated essential infrastructure—including a defibrillator, water purification system, CPR center and library—to his native community.He has also supported medical camps and donated essential infrastructure—including a defibrillator, water purification system, and library—to his native community.

Dr. Kathula has served AAPI in numerous leadership roles, including Regional Director, Trustee, Treasurer, Secretary, Vice President, and President-Elect before assuming the presidency in July 2024.

Dr. Kathula has received numerous honors, including the U.S. Presidential Lifetime Achievement Award. In December 2024, he was honored with the Inspirational Award by the Raising Awareness of Youth with Autism (RAYWA) Foundation at a gala held at New York’s iconic Pierre Hotel. In May 2025, IAPC itself bestowed upon him its Lifetime Achievement Award.

Founded in 2013, the Indo-American Press Club continues to serve as a unifying platform for journalists of Indian origin, fostering collaboration, professionalism, and a commitment to the public good. More information is available at www.indoamericanpressclub.com..

Tiny Autonomous Robots Achieve Independent Swimming Capability

Researchers have developed the smallest fully programmable autonomous robots capable of swimming, potentially transforming medicine and healthcare.

For decades, the concept of microscopic robots has largely existed in the realm of science fiction. Films like “Fantastic Voyage” fueled our imaginations, suggesting that tiny machines could one day navigate the human body to repair ailments from within. However, this vision remained elusive, primarily due to the constraints imposed by physics.

Now, a significant breakthrough from researchers at the University of Pennsylvania and the University of Michigan has altered this narrative. The teams have successfully created the smallest fully programmable autonomous robots to date, and these innovative machines can swim.

Measuring approximately 200 by 300 by 50 micrometers, these robots are smaller than a grain of salt and comparable in size to a single-celled organism. Unlike traditional robots that rely on legs or propellers for movement, these microscopic machines utilize electrokinetics. Each robot generates a small electrical field that attracts charged ions in the surrounding fluid, effectively creating a current that propels the robot forward without any moving parts. This design not only enhances durability but also simplifies handling with delicate laboratory tools.

Each robot is powered by tiny solar cells that produce just 75 nanowatts of energy—over 100,000 times less than what a smartwatch consumes. To achieve this level of efficiency, engineers had to redesign various components, including ultra-low voltage circuits and a custom instruction set that condenses complex behaviors into a few hundred bits of memory. Despite these limitations, each robot is capable of sensing its environment, storing data, and making decisions about its next movements.

Due to their size, the robots cannot accommodate antennas. Instead, the research team drew inspiration from nature, enabling each robot to perform a specific wiggle pattern to convey information, such as temperature. This motion follows a precise encoding scheme that researchers can interpret by observing the robots under a microscope. This method of communication is reminiscent of how bees convey messages through movement. Programming the robots is equally innovative; researchers use light signals that the robots interpret as instructions, with a built-in passcode to prevent interference from random light sources.

In current experiments, the robots exhibit thermotaxis, meaning they can sense heat and swim autonomously toward warmer areas. This capability suggests promising future applications, such as tracking inflammation, identifying disease markers, or delivering drugs with pinpoint accuracy. While light can already power these robots near the skin, researchers are also investigating ultrasound as a potential energy source for deeper environments.

Thanks to their construction using standard semiconductor manufacturing techniques, these robots can be produced en masse. More than 100 robots can fit on a single chip, and manufacturing yields have already surpassed 50%. In large-scale production, the estimated cost could drop below one cent per robot, making the concept of disposable robot swarms a tangible reality.

This technology is not merely about creating flashy gadgets; it represents a significant advancement in scalability. Robots of this size could one day monitor health at the cellular level, construct materials from the ground up, or explore environments that are too fragile for larger machines. Although practical medical applications are still years away, this breakthrough indicates that true autonomy at the microscale is finally within reach.

For nearly half a century, the promise of microscopic robots has felt like a dream that science could never fully realize. However, this research, published in Science Robotics, marks a pivotal shift. By embracing the unique physics of the microscale rather than resisting it, engineers have unlocked an entirely new class of machines. This is just the beginning, but it represents a significant leap forward. As sensing, movement, and decision-making capabilities are integrated into these nearly invisible robots, the future of robotics is poised to look remarkably different.

As we consider the potential of tiny robots swimming through our bodies, the question arises: would we trust them to monitor our health or deliver treatment? This inquiry invites further exploration into the future of healthcare technology.

According to Science Robotics, the implications of this research could extend far beyond initial expectations, paving the way for revolutionary advancements in medical science.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals.

Google is embarking on an innovative project that harnesses artificial intelligence (AI) to explore the complexities of dolphin communication, with the ultimate aspiration of enabling humans to converse with these remarkable creatures.

Dolphins are widely recognized as some of the most intelligent animals on the planet, celebrated for their emotional depth and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP)—a Florida-based non-profit dedicated to studying dolphin sounds for over four decades—Google is developing a new AI model named DolphinGemma.

The Wild Dolphin Project has spent years correlating various dolphin sounds with specific behavioral contexts. For example, signature whistles are commonly used by mothers to locate their calves, while burst pulse “squawks” are often associated with aggressive encounters among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by WDP, Google has constructed DolphinGemma, which builds upon its existing lightweight AI model known as Gemma. This new model is designed to analyze a vast library of dolphin recordings, identifying patterns, structures, and potential meanings behind the vocalizations of these marine mammals.

Over time, DolphinGemma aims to categorize dolphin sounds into distinct groups—similar to words, sentences, or expressions in human language. According to a blog post from Google, “By identifying recurring sound patterns, clusters, and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort.”

The project envisions that these identified patterns, combined with synthetic sounds created by researchers to represent objects that dolphins enjoy interacting with, may eventually lead to the establishment of a shared vocabulary for interactive communication between humans and dolphins.

DolphinGemma employs audio recording technology from Google’s Pixel phones to capture high-quality sound recordings of dolphin vocalizations. This technology is adept at isolating dolphin clicks and whistles from background noise, such as waves, boat engines, or underwater static. Clean audio is essential for AI models like DolphinGemma, as noisy data can hinder the AI’s ability to learn effectively.

Google plans to release DolphinGemma as an open model this summer, making it accessible for researchers worldwide to utilize and adapt for their own studies. Although the model has been primarily trained on Atlantic spotted dolphins, researchers believe it could also be fine-tuned to study other species, such as bottlenose or spinner dolphins.

In a statement, Google expressed its hope that by providing tools like DolphinGemma, researchers globally will be empowered to analyze their own acoustic datasets, accelerate the search for patterns, and collectively enhance our understanding of these intelligent marine mammals.

As this groundbreaking project unfolds, the potential for deeper human-dolphin communication may soon become a reality, opening new avenues for interaction with one of the ocean’s most fascinating inhabitants, according to Fox News.

AI Robot Provides Emotional Support for Pets

Aura, an AI-powered pet robot by Tuya Smart, aims to enhance emotional care for pets by tracking their behavior and providing real-time interaction.

Tuya Smart has unveiled Aura, its first AI-powered companion robot designed specifically for household pets, including cats and dogs. This innovative device utilizes artificial intelligence to recognize pet behaviors, movements, and vocal cues, addressing a growing need for emotional engagement in pet care.

The concept behind Aura is straightforward: pets require more than just food and surveillance; they need attention, interaction, and reassurance. Aura actively monitors pets at home, observing behavioral changes and responding in real time, which helps owners gain insights into their pets’ emotional states. Many pets experience stress or anxiety when left alone for extended periods, with subtle signs often emerging first. For instance, a dog may stop playing, while a cat might hide or groom excessively. Aura steps in during these quiet moments, providing engagement and companionship rather than leaving pets in an empty room.

While traditional smart feeders and pet cameras cover basic needs, emotional care presents a different challenge. Pets are inherently social creatures, and their moods can shift rapidly with changes in routine. Aura tracks behavior and listens for variations in sound patterns, allowing it to discern whether a pet is feeling excited, anxious, lonely, or relaxed. This information is relayed to the owner’s smartphone in real time, enabling early detection of potential issues.

Aura functions more like a companion than a stationary device. It employs multiple systems throughout the day to keep pets engaged. Rather than waiting for a button press, Aura proactively seeks opportunities for interaction, transforming long, quiet hours into moments of play and stimulation. Additionally, it captures everyday highlights—such as playful bursts, calm naps, and amusing interactions—using AI pet recognition and intelligent tracking. These moments can be automatically compiled into short videos, allowing owners to stay connected with their pets even when they are away. This feature also makes it easier to document and share special moments with family or on social media.

Movement is a key aspect of Aura’s functionality. Equipped with V-SLAM navigation, binocular vision, and AIVI object recognition, Aura can navigate freely around the home while avoiding obstacles. When its battery runs low, it autonomously returns to its charging dock, ensuring it remains ready for action without requiring constant attention from owners.

Aura is designed to integrate with Tuya’s broader ecosystem, which offers services beyond basic pet care. These services include smart pet boarding, health and medical care, behavior training, grooming, customization, and community tools. Rather than focusing on a single task, Aura serves as a central hub for comprehensive pet care that can evolve over time.

While Aura currently targets pet care, the underlying technology has broader implications. The principles of emotional awareness, proactive assistance, and ecosystem integration could also be applied to elder care, home monitoring, and family connectivity. By starting with pets, Tuya establishes a clear emotional use case while laying the groundwork for future advancements in home robotics.

Despite the excitement surrounding Aura, Tuya has yet to announce a release date or pricing details. The company introduced the robot earlier this month at CES 2026, but specifics regarding availability and cost remain unclear. These details are expected to emerge as the company approaches a wider consumer launch.

Aura represents a significant shift in how smart home technology interacts with pets, moving beyond simple monitoring to embrace interaction and emotional awareness. If Aura fulfills its promise, it could provide pet owners with greater peace of mind when leaving their pets home alone, while maintaining a connection throughout the day.

As technology advances to interpret and respond to pet emotions in real time, it raises questions about the role of such devices in our daily routines. Would you trust an AI companion to become part of your pet care regimen, or would that feel like an overstep? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the future of pet care is evolving with technology that prioritizes emotional well-being.

Google Fast Pair Vulnerability Allows Hackers to Take Control of Headphones

Google has responded to serious security flaws in its Fast Pair technology, which could allow hackers to hijack Bluetooth headphones and other devices, by issuing patches and updating certification requirements.

Google’s Fast Pair technology, designed to simplify Bluetooth connections, is facing significant security vulnerabilities that could allow unauthorized access to headphones, earbuds, and speakers. Researchers from KU Leuven have identified these flaws, which they have dubbed “WhisperPair.” This method enables nearby attackers to connect to devices without the owner’s knowledge, raising serious privacy concerns.

One of the most alarming aspects of this vulnerability is that it affects not only Android users but also iPhone users. Fast Pair operates by broadcasting a device’s identity to nearby phones and computers, facilitating quick connections. However, the researchers discovered that many devices fail to enforce a critical rule: they continue to accept new pairings even when already connected. This oversight creates an opportunity for malicious actors.

Within Bluetooth range, an attacker can silently pair with a device in approximately 10 to 15 seconds. Once connected, they can disrupt calls, inject audio, or even activate the device’s microphone. Notably, this attack can be executed using standard devices such as smartphones, laptops, or low-cost hardware like Raspberry Pi, allowing the attacker to effectively assume control of the device.

The researchers tested 17 Fast Pair-compatible devices from well-known brands, including Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech, and Google. Alarmingly, most of these products had passed Google’s certification testing, raising concerns about the efficacy of the security checks in place.

Some affected models pose an even greater privacy risk. Certain Google and Sony devices integrate with Find Hub, a feature that uses nearby devices to estimate location. If an attacker connects to a headset that has never been linked to a Google account, they can continuously track the user’s movements. If the victim later receives a tracking alert, it may appear to reference their own device, making it easy to dismiss as an error.

Another issue that many users may overlook is the necessity of firmware updates for headphones and speakers. These updates typically come through brand-specific apps that many users do not install. Consequently, vulnerable devices could remain exposed for extended periods if users do not take action.

The only way to mitigate this vulnerability is by installing a software update provided by the device manufacturer. While many companies have already released patches, updates may not yet be available for every affected model. Users are advised to check directly with their manufacturers to confirm whether a security update exists for their specific device.

Importantly, the flaw does not lie within Bluetooth itself but rather within the convenience layer built on top of it. Fast Pair prioritized speed over strict ownership enforcement, which researchers argue should require cryptographic proof of ownership. Without such measures, convenience features can become potential attack surfaces. Security and ease of use can coexist, but they must be designed in tandem.

In response to these vulnerabilities, Google has been collaborating with researchers to address the WhisperPair flaws. The company began distributing recommended patches to headphone manufacturers in early September and confirmed that its own Pixel headphones have been updated.

A Google spokesperson stated, “We appreciate collaborating with security researchers through our Vulnerability Rewards Program, which helps keep our users safe. We worked with these researchers to fix these vulnerabilities, and we have not seen evidence of any exploitation outside of this report’s lab setting. As a best security practice, we recommend users check their headphones for the latest firmware updates. We are constantly evaluating and enhancing Fast Pair and Find Hub security.”

Google has indicated that the core issue stemmed from some accessory manufacturers not fully adhering to the Fast Pair specification, which requires devices to accept pairing requests only when a user has intentionally placed the device into pairing mode. Failures to enforce this rule contributed to the audio and microphone risks identified by researchers.

To mitigate future risks, Google has updated its Fast Pair Validator and certification requirements to explicitly test whether devices properly enforce pairing mode checks. The company has also provided accessory partners with fixes intended to resolve all related issues once applied.

On the location tracking front, Google has implemented a server-side fix that prevents accessories from being silently enrolled into the Find Hub network if they have never been paired with an Android device. This change addresses the tracking risk across all devices, including Google’s own accessories.

Despite these efforts, researchers have expressed concerns about the speed at which patches reach users and the extent of Google’s visibility into real-world exploitation that does not involve Google hardware. They argue that weaknesses in certification allowed flawed implementations to reach the market at scale, indicating broader systemic issues.

For now, both Google and the researchers agree on one crucial point: users must install manufacturer firmware updates to ensure protection, and the availability of these updates may vary by device and brand.

While users cannot entirely disable Fast Pair, they can take steps to reduce their exposure. If you use a Bluetooth accessory that supports Google Fast Pair, including wireless earbuds, headphones, or speakers, you may be affected. Researchers have developed a public lookup tool that allows users to check whether their specific device model is vulnerable. This tool can be accessed at whisperpair.eu/vulnerable-devices.

To enhance security, users are encouraged to install the official app from their headphone or speaker manufacturer, check for firmware updates, and apply them promptly. Pairing new devices in private spaces and being cautious of unexpected audio interruptions or strange sounds can also help mitigate risks. A factory reset can remove unauthorized pairings, but it does not resolve the underlying vulnerability; a firmware update is still necessary.

Bluetooth should only be active during use, and turning it off when not in use can limit exposure, although it does not eliminate the risk if the device remains unpatched. Always factory reset used headphones or speakers before pairing them to remove hidden links and account associations. Additionally, promptly installing operating system updates can block exploit paths even when accessory updates lag behind.

The WhisperPair vulnerabilities highlight how small conveniences can lead to significant privacy failures. While headphones may seem innocuous, they contain microphones, radios, and software that require regular attention and updates. Neglecting these devices can create blind spots that attackers are eager to exploit. Staying secure now necessitates a proactive approach to devices that users may have previously taken for granted.

For further information and updates, users can refer to CyberGuy.

Smart Pill Technology Confirms When Medication Is Swallowed

The Massachusetts Institute of Technology has developed a smart pill that confirms medication ingestion, potentially improving patient adherence and health outcomes while safely breaking down in the body.

Engineers at the Massachusetts Institute of Technology (MIT) have designed an innovative smart pill that confirms when a patient has swallowed their medication. This advancement aims to enhance treatment tracking for healthcare providers and help patients adhere to their medication schedules, ultimately reducing the risk of missed doses that can jeopardize health.

The smart pill incorporates a tiny, biodegradable radio-frequency antenna made from zinc and cellulose, materials that are already established as safe for medical use. This system fits within existing pill capsules and operates by emitting a signal that can be detected by an external receiver, potentially integrated into a wearable device, from a distance of up to two feet.

This entire process occurs within approximately ten minutes after ingestion. Unlike previous smart pill designs that utilized components that remained intact throughout the digestive system, raising concerns about long-term safety, the MIT team has taken a different approach. Most parts of the antenna decompose in the stomach within days, leaving only a small off-the-shelf RF chip that naturally passes through the body.

Lead researcher Mehmet Girayhan Say emphasized the goal of the project: to provide a reliable confirmation of medication ingestion without the risk of long-term buildup in the body.

This smart pill is not intended for every type of medication but is specifically designed for situations where missing a dose can have serious consequences. Potential beneficiaries include patients who have undergone organ transplants, those managing tuberculosis, and individuals with complex neurological conditions. For these patients, adherence to prescribed medication can be the difference between recovery and severe complications.

Senior author Giovanni Traverso highlighted that the primary focus of this technology is on patient health. The aim is to support individuals rather than monitor them. The research team has published its findings in the journal Nature Communications and is planning further preclinical testing, with human trials expected to follow as the technology progresses toward real-world application.

This research has received funding from several sources, including Novo Nordisk, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital Division of Gastroenterology, and the U.S. Advanced Research Projects Agency for Health.

Missed medication doses contribute to hundreds of thousands of preventable deaths annually and add billions of dollars to healthcare costs. This issue is particularly critical for patients who require consistent treatment over extended periods. For individuals in vulnerable health situations, such as organ transplant recipients or those with chronic illnesses, the implications of missed doses can be life-altering.

While the smart pill technology is still in development, it offers the potential to provide an additional layer of safety for patients relying on critical medications. It could alleviate some of the pressures faced by patients managing complex treatment plans and reduce uncertainty for healthcare providers regarding patient adherence.

However, the introduction of such technology also raises important questions about privacy, consent, and the sharing of medical data. Any future implementation will need robust safeguards to protect patient information.

For those awaiting the availability of this technology, there are still effective ways to stay on track with medication regimens. Utilizing built-in tools on smartphones can help individuals manage their medication schedules effectively.

The concept of a pill that confirms ingestion may seem futuristic, but it addresses a pressing issue in healthcare. By combining simple materials with innovative engineering, MIT researchers have created a tool that could potentially save lives without leaving harmful residues in the body. As testing continues, this approach could significantly reshape the monitoring and delivery of medical treatments.

Would you be comfortable taking a pill that reports when you swallow it if it meant better health outcomes? Share your thoughts with us at Cyberguy.com.

According to MIT, this groundbreaking technology could transform medication adherence and patient care.

Potential Discovery of New Dwarf Planet Challenges Planet Nine Theory

The potential discovery of a new dwarf planet, 2017OF201, may provide fresh insights into the elusive Planet Nine theory and the structure of the Kuiper Belt.

A team of scientists at the Institute for Advanced Study’s School of Natural Sciences in Princeton, New Jersey, has announced the potential discovery of a new dwarf planet, which could lend support to the theory of a theoretical super-planet known as Planet Nine.

The object, designated 2017OF201, is classified as a trans-Neptune object (TNO), which refers to minor planets that orbit the Sun at distances greater than that of Neptune. Located on the fringes of our solar system, 2017OF201 stands out due to its significant size and unusual orbital characteristics.

Led by researchers Sihao Cheng, Jiaxuan Li, and Eritas Yang from Princeton University, the team utilized advanced computational methods to track the object’s distinctive trajectory in the night sky. Cheng noted that the aphelion, or the farthest point in the orbit from the Sun, of 2017OF201 is more than 1,600 times that of Earth’s orbit. In contrast, its perihelion, the closest point to the Sun, is 44.5 times that of Earth’s orbit, a pattern reminiscent of Pluto’s orbit.

2017OF201 takes approximately 25,000 years to complete a single orbit around the Sun. Yang suggested that the object likely experienced close encounters with a giant planet, which may have resulted in its ejection to a wide orbit. Cheng elaborated on this idea, proposing that the object might have initially been expelled to the Oort Cloud, the most distant region of our solar system, before being drawn back toward the Sun.

This discovery has important implications for our understanding of the outer solar system’s structure. In January 2016, astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech) presented research suggesting the existence of a planet approximately 1.5 times the size of Earth, located in the outer solar system. However, the existence of this so-called Planet Nine remains theoretical, as neither Batygin nor Brown has directly observed the planet.

According to the theory, Planet Nine is thought to be roughly the size of Neptune and located far beyond Pluto, in the vicinity of the Kuiper Belt, where 2017OF201 was discovered. If it exists, Planet Nine could possess a mass up to ten times that of Earth and orbit the Sun from a distance up to 30 times greater than that of Neptune. It is estimated that this hypothetical planet would take between 10,000 and 20,000 Earth years to complete one full orbit around the Sun.

Previously, the region beyond the Kuiper Belt was believed to be largely empty. However, the discovery of 2017OF201 suggests that this area may be more populated than previously thought. Cheng remarked that only about 1% of 2017OF201’s orbit is currently visible to astronomers.

“Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system,” Cheng stated in the announcement.

Nasa has indicated that if Planet Nine does exist, it could help explain the peculiar orbits of certain smaller objects within the distant Kuiper Belt. As it stands, the existence of Planet Nine remains largely theoretical, with its potential presence inferred from gravitational patterns observed in the outer solar system.

This latest discovery underscores the ongoing quest to understand the complexities of our solar system and the potential for finding new celestial bodies that may reshape our understanding of its structure.

According to Fox News, the implications of 2017OF201’s discovery could be significant for future research into the outer solar system.

Meta Limits Teen Access to AI Characters for Safety Reasons

Meta Platforms will temporarily restrict access to AI characters for teenagers as it develops a new, age-appropriate version that includes parental controls and adheres to PG-13 content guidelines.

Meta Platforms announced on Friday that it will suspend access to its AI characters for teenagers across all its applications globally. This decision comes as the company works on a revised version of the feature tailored specifically for younger users.

The initiative reflects Meta’s commitment to refining the interaction between its AI products and teenage users amid increasing scrutiny regarding safety, age-appropriate design, and the implications of generative AI on social media platforms.

“Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta stated.

Once the revamped AI characters are launched, they will incorporate parental controls, allowing families greater oversight of how younger users engage with the technology. This move follows a preview of these controls released in October, where Meta indicated that parents would have the option to disable private chats between their teens and AI characters. This response was prompted by growing concerns over reports of flirtatious interactions between chatbots and minors on its platforms.

Despite the announcement, Meta clarified that these parental controls are not yet operational. Additionally, the company has committed to ensuring that its AI experiences for teenagers adhere to the PG-13 movie rating framework, aiming to restrict exposure to content considered inappropriate for minors.

The changes come at a time when U.S. regulators are intensifying their examination of AI companies and the potential risks associated with chatbots. In August, reports indicated that Meta’s internal AI guidelines had permitted provocative conversations involving minors, further amplifying the pressure on the company to enhance its safety measures.

As the landscape of AI technology continues to evolve, Meta’s proactive approach aims to address the concerns of parents and regulators alike, ensuring a safer online environment for younger users.

The post Meta to block teen access to AI characters appeared first on The American Bazaar, according to The American Bazaar.

Ransomware Attack Exposes Social Security Numbers at Major Gas Station Chain

A recent ransomware attack on a Texas gas station chain has exposed the personal information of over 377,000 individuals, raising concerns about data security in the retail sector.

A ransomware attack on a Texas-based gas station chain has resulted in the exposure of sensitive personal data for more than 377,000 individuals, including Social Security numbers and driver’s license information. This incident underscores the vulnerabilities that exist in industries that handle large volumes of personal data but may lack robust cybersecurity measures.

The breach was reported by Gulshan Management Services, Inc., which is affiliated with Gulshan Enterprises, the operator of approximately 150 Handi Plus and Handi Stop gas stations and convenience stores throughout Texas. According to a disclosure filed with the Maine Attorney General’s Office, the company detected unauthorized access to its IT systems in late September.

Investigators later discovered that the attackers had infiltrated the network for about ten days before the breach was identified. The intrusion began with a phishing attack, highlighting the risks associated with deceptive emails that can lead to significant data breaches.

During this period, the attackers accessed and stole a range of personal information, subsequently deploying ransomware that encrypted files across Gulshan’s systems. The compromised data includes names, contact details, Social Security numbers, and driver’s license numbers, all of which pose serious risks for identity theft and fraud that may manifest long after the breach.

As of now, no ransomware group has publicly claimed responsibility for the attack. While this may seem like a silver lining, it does not alleviate the risks for those affected. In many ransomware incidents, the absence of a claim can indicate that the attackers have not yet released the stolen data publicly or that the victim company has resolved the situation privately.

Gulshan’s filing indicates that the company restored its systems using known-safe backups, suggesting that it opted to rebuild rather than negotiate with the attackers. However, once sensitive data has been extracted from a network, it cannot be retracted, leaving affected individuals at risk regardless of whether the stolen information appears online.

This incident highlights a recurring issue within the retail and service sectors, where businesses often rely on outdated systems and employees who may be vulnerable to phishing attacks. Although gas stations may not seem like obvious targets for cybercriminals, their payment systems, loyalty programs, and human resources databases make them attractive for data breaches.

In light of this breach, individuals whose information may have been compromised should take proactive steps to mitigate potential fallout. If the company offers free credit monitoring or identity protection services, it is advisable to enroll in those programs. Such services can provide early alerts if someone attempts to open accounts or misuse personal information.

If no such services are offered, individuals should consider signing up for a reputable identity theft protection service independently. These services can monitor personal information, such as Social Security numbers and email addresses, and alert users if their data is being sold on the dark web or used to open accounts fraudulently.

Additionally, employing a password manager can help create and store unique passwords for each account, further securing personal information against unauthorized access. Users should also check if their email addresses have been involved in past data breaches and change any reused passwords immediately if they find a match.

Implementing two-factor authentication (2FA) adds another layer of security, particularly for email, banking, and shopping accounts, which are often primary targets for cybercriminals. Furthermore, maintaining strong antivirus software can help detect phishing attempts and suspicious activity before they escalate into significant breaches.

After incidents like this, scammers frequently exploit the situation by sending fake emails or texts impersonating the affected company or credit monitoring services. It is crucial to verify any messages independently and avoid clicking on unexpected links.

Individuals should regularly review their credit reports from major bureaus for unfamiliar accounts or inquiries. They are entitled to free reports, and early detection of issues can facilitate easier resolutions.

If a Social Security number has been compromised, placing a credit freeze can prevent lenders from opening new accounts in the victim’s name, even if they possess personal details. Credit bureaus provide this service at no charge, and it can be temporarily lifted when applying for credit. Alternatively, individuals may opt for a fraud alert, which requires lenders to verify identity before approving credit.

Moreover, when Social Security numbers are stolen, tax fraud often follows, as criminals can file fake tax returns to claim refunds. An IRS Identity Protection PIN (IP PIN) can help prevent this by ensuring that only the rightful owner can file a tax return using their SSN.

It is essential to not only monitor for new fraud but also to secure existing accounts. Setting up alerts for large transactions or changes to contact information can help detect unauthorized activity early. If personal information has been compromised, contacting banks for additional protections is advisable.

This incident serves as a stark reminder that personal data is not only held by banks and healthcare providers but also by retailers and service operators. As cybercriminals exploit vulnerabilities through simple phishing emails, the potential for widespread damage increases significantly. While individuals cannot prevent such breaches, they can take steps to limit the impact of stolen data by securing their accounts and remaining vigilant.

For more information on how to protect yourself from identity theft and data breaches, visit Cyberguy.com.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a face-mounted electronic tattoo, or “e-tattoo,” to monitor mental workload in high-stress professions, utilizing EEG and EOG technology for brain activity analysis.

Scientists have introduced an innovative solution designed to help individuals in high-pressure work environments monitor their cognitive performance. This new device, known as an electronic tattoo or “e-tattoo,” is applied to the forehead and is intended to track brainwaves and mental workload.

A study published in the journal Device outlines the advantages of e-tattoos as a cost-effective and user-friendly method for assessing mental workload. Dr. Nanshu Lu, the senior author of the research from the University of Texas at Austin, emphasized that mental workload is a critical component in human-in-the-loop systems, significantly affecting cognitive performance and decision-making.

In an email to Fox News Digital, Dr. Lu noted that the motivation behind this device stems from the needs of professionals in high-demand, high-stakes jobs, including pilots, air traffic controllers, doctors, and emergency dispatchers. The technology could also benefit emergency room doctors and operators of robots or drones, enhancing both training and performance.

One of the primary objectives of the study was to develop a method for measuring cognitive fatigue in roles that require intense mental focus. The e-tattoo is designed to be temporarily affixed to the forehead and is notably smaller than existing devices on the market.

The device operates by employing electroencephalogram (EEG) and electrooculogram (EOG) technologies to monitor brain waves and eye movements. Traditional EEG and EOG machines tend to be bulky and expensive; however, the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained, “We propose a wireless forehead EEG and EOG sensor designed to be as thin and conformable to the skin as a temporary tattoo sticker, which is referred to as a forehead e-tattoo.” She further noted that understanding human mental workload is essential in the fields of human-machine interaction and ergonomics due to its direct impact on cognitive performance.

The study involved six participants who were tasked with identifying letters displayed on a screen. The letters appeared one at a time in various locations, and participants were instructed to click a mouse if either the letter or its position matched one shown previously. Each participant completed the task multiple times, with varying levels of difficulty.

The researchers observed that as the tasks increased in complexity, the brainwave patterns detected by the e-tattoo indicated a corresponding rise in mental workload. The device comprises a battery pack, reusable chips, and a disposable sensor, making it both practical and efficient for use in cognitive assessments.

Currently, the e-tattoo exists as a laboratory prototype. Dr. Lu mentioned that further development is necessary before it can be commercialized, including the implementation of real-time mental workload decoding and validation in more realistic settings. The prototype is estimated to cost around $200.

This groundbreaking research highlights the potential for e-tattoos to revolutionize how professionals in high-stress jobs monitor their cognitive health and performance, paving the way for advancements in training and operational efficiency.

According to Fox News, the development of this technology could significantly impact various fields by providing a more accessible means of tracking mental workload and cognitive fatigue.

Web Skimming Attacks Target Major Payment Networks and Consumers

Researchers are tracking a persistent web skimming campaign that targets major payment networks, using malicious JavaScript to steal credit card information from unsuspecting online shoppers.

As online shopping becomes increasingly familiar and convenient, a hidden threat lurks beneath the surface. Researchers are monitoring a long-running web skimming campaign that specifically targets businesses connected to major payment networks. This technique enables criminals to secretly insert malicious code into checkout pages, allowing them to capture payment details as customers enter them. Often, these attacks operate unnoticed within the browser, leaving victims unaware until unauthorized charges appear on their statements.

The term “Magecart” refers to various groups that specialize in web skimming attacks. These attacks primarily focus on online stores where customers input payment information during the checkout process. Rather than directly hacking banks or card networks, attackers embed malicious code into a retailer’s checkout page. This code, typically written in JavaScript, is a standard programming language used to enhance website interactivity, such as managing forms and processing payments.

In Magecart attacks, criminals exploit this same JavaScript to covertly capture card numbers, expiration dates, security codes, and billing details as shoppers input their information. The checkout process continues to function normally, providing no immediate warning signs to users. Initially, Magecart referred specifically to attacks on Magento-based online stores, but the term has since expanded to encompass web skimming campaigns across various e-commerce platforms and payment systems.

Researchers indicate that this ongoing campaign targets merchants linked to several major payment networks. Large enterprises that depend on these payment providers face heightened risks due to their complex websites and reliance on third-party integrations. Attackers typically exploit overlooked vulnerabilities, such as outdated plugins, vulnerable third-party scripts, and unpatched content management systems. Once they gain access, they inject JavaScript directly into the checkout flow, allowing the skimmer to monitor form fields associated with card data and personal information. This data is then quietly transmitted to servers controlled by the attackers.

To evade detection, the malicious JavaScript is often heavily obfuscated. Some variants can even remove themselves if they detect an admin session, creating a false impression of a clean inspection. Researchers have also noted that the campaign utilizes bulletproof hosting services, which ignore abuse reports and takedown requests, providing attackers with a stable environment to operate. Because web skimmers function within the browser, they can circumvent many server-side fraud controls employed by merchants and payment providers.

Magecart campaigns simultaneously impact three groups: the online retailers, the customers, and the payment networks. This shared vulnerability complicates detection and response efforts.

While consumers cannot rectify compromised checkout pages, adopting a few smart habits can help mitigate exposure, limit the misuse of stolen data, and facilitate quicker detection of fraud. One effective strategy is to use virtual and single-use cards, which are digital card numbers linked to a real credit or debit account without revealing the actual number. These cards function like standard cards during checkout but provide an additional layer of security. Many people can access these services through their existing banking apps or mobile wallets, such as Apple Pay and Google Pay, which generate temporary card numbers for online transactions.

A single-use card typically works for one purchase or expires shortly after use, while a virtual card can remain active for a specific merchant and be paused or deleted later. If a web skimming attack captures one of these numbers, attackers are generally unable to reuse it elsewhere, significantly limiting financial damage and making it easier to halt fraud.

Transaction alerts can notify users the moment their card is used, even for minor purchases. If web skimming leads to fraudulent activity, these alerts can quickly reveal unauthorized charges, allowing cardholders to freeze their accounts before losses escalate. For instance, a small test charge of $2 could indicate fraud before larger transactions occur.

Using strong, unique passwords for banking and card portals can also reduce the risk of account takeovers. A password manager can assist in generating and securely storing these credentials. Additionally, individuals should check if their email addresses have been compromised in past data breaches. Many password managers include built-in breach scanners that alert users if their information appears in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Robust antivirus software can block connections to malicious domains used to collect skimmed data and alert users about unsafe websites. This protection is essential for safeguarding personal information and digital assets from potential threats, including phishing emails and ransomware scams.

Data removal services can also help minimize the amount of personal information exposed online, making it more challenging for criminals to match stolen card data with complete identity details. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of targeted attacks.

Regularly reviewing financial statements, even for small charges, is another prudent practice, as attackers often test stolen cards with low-value transactions. The Magecart web skimming campaign illustrates how attackers can exploit trusted checkout pages without disrupting the shopping experience. Although consumers cannot fix compromised sites, implementing simple safeguards can help reduce risk and facilitate early detection of fraud. Online payments rely on trust, but this campaign underscores the importance of pairing that trust with caution.

As awareness of web skimming grows, consumers may find themselves reconsidering the safety of online checkout processes. For further information and resources on protecting against these threats, visit CyberGuy.com.

Indian-American CEO Vasudha Badri-Paul Launches AI Accelerator in East Bay

Vasudha Badri-Paul, founder and CEO of Avatara AI, discusses her transition from corporate life to launching an AI accelerator aimed at fostering innovation in California’s East Bay.

Vasudha Badri-Paul, the founder and CEO of Avatara AI, has embarked on an ambitious journey to reshape the landscape of artificial intelligence startups in California’s East Bay. After a lengthy corporate career, she is now focused on building an AI accelerator that aims to nurture the next generation of innovators.

In 2023, Badri-Paul established Avatara AI, a San Francisco-based firm dedicated to helping businesses design and manage AI solutions. She recognized the urgent need for companies to adapt to the rapidly evolving AI landscape. “AI is advancing at such a rapid pace that failing to continuously update your skills can leave you obsolete almost overnight,” she noted.

However, her decision to leave a stable corporate career was also influenced by the Bay Area’s unpredictable hiring environment. “I would say that the job lifespan in the Bay Area is two years, and it’s the same across sectors—corporate, tech, marketing, sales, everywhere,” she explained. With experience at major corporations like Pfizer, Microsoft, GE, Cisco, and Intel, Badri-Paul has witnessed firsthand the constant churn in the job market.

She elaborated on the challenges of this cycle, stating, “There is a constant churn. Reasons range from no funding to restructuring, and people are asked to leave every few years. This recurring cycle in the Bay Area job market that results in redundancies gets tiring after a while. Everyone is watching their back; there is no margin for humanity.”

Frustrated by this instability, Badri-Paul decided to take a bold step: “I took a hard stance and thought of building a company of my own.” As an early innovator in the AI space, she recognized the transformative potential of AI across various sectors. At Avatara, she oversees the development and deployment of AI solutions, focusing on responsible and ethical practices.

In addition to her work at Avatara, Badri-Paul is enthusiastic about the opportunities emerging in the East Bay region. She recently launched the Velocity East Accelerator, which she envisions as a catalyst for the future of AI in the area. “In California, Silicon Valley is where all the tech happens. It is the start-up empire. Despite this boom, some parts of Silicon Valley remain underrepresented, and we have been seeing a shift in the trend,” she stated.

Badri-Paul believes that the East Bay is on the verge of significant growth. “East Bay has kind of taken off,” she remarked. Through Velocity East, she aims to create a hub for innovation and entrepreneurship. As a long-time California resident, she has observed how migration patterns have spurred development in the region. “During Covid, a builder built about 20,000 homes in East Bay. A lot of migration happened during that time,” she noted.

Despite the influx of new residents, Badri-Paul observed a lack of formal support for startups in the area. “While there is a boom in newer residents, there was no formal atmosphere to nurture startups in the area, no Y Combinators—basically no ecosystem to help build ideas,” she explained.

With this vision in mind, she launched Velocity East, an AI accelerator based in San Ramon. Badri-Paul emphasized that the goal of the accelerator is not to replicate existing tech programs but to highlight the potential for groundbreaking AI companies to emerge from the East Bay. “We are talking about areas such as Fremont, Concord, as well as across Alameda and Contra Costa counties,” she said.

Velocity East is powered by The AI Foundry community and aims to accelerate early-stage AI startups through mentorship, resources, and access to capital. Badri-Paul added, “We also build bridges between East Bay innovators and the broader Bay Area ecosystem and create pathways for underrepresented founders to lead in AI.”

Her larger vision is to establish San Ramon and Bishop Ranch as legitimate hubs for AI innovation, shining a spotlight on the East Bay as a vital player in the tech landscape.

As Badri-Paul continues to navigate her entrepreneurial journey, she remains committed to fostering an environment where innovation can thrive, ensuring that the East Bay is recognized as a key contributor to the future of artificial intelligence.

According to The American Bazaar, Badri-Paul’s efforts represent a significant shift in the tech ecosystem, highlighting the importance of nurturing local talent and ideas.

Rising Data Center Growth May Lead to Increased Electricity Costs

A new study reveals that the rapid growth of data centers could significantly increase electricity costs and strain power grids, posing environmental challenges.

A recent study conducted by the Union of Concerned Scientists highlights the potential consequences of the rapid construction of data centers, warning that this surge in demand for electricity could lead to soaring energy costs and environmental harm.

Published on Monday, the report indicates that the pace at which data centers are being built is outstripping the ability of utilities to supply adequate electricity. Mike Jacobs, a senior manager of energy at the organization, emphasized the challenge: “They’re increasing the demand faster than you can increase the supply. How’re you going to do that?”

The report, titled “Data Center Power Play,” models various electricity demand scenarios over the next 25 years, alongside different energy policy approaches to meet these demands. The study aims to estimate the potential costs in terms of electricity, climate impact, and public health, which could amount to trillions of dollars.

Jacobs noted that implementing clean energy policies could mitigate these costs while reducing air pollution and health impacts. He pointed out that the construction of an electric grid capable of meeting the rising demand for power will take significantly longer than building new data centers.

“This is a collision between the people whose philosophy is ‘move fast and break things,’ with the utility industry that has nobody that says move fast and break things,” Jacobs remarked, referring to the rapid expansion of data center facilities. He also mentioned that predicting future demand for data centers is challenging due to limited information from utilities and major tech companies. How this demand is addressed will be crucial for both public health and environmental sustainability.

Jacobs further stated, “This is really a great moment for regulators to do what’s within their authority and sort out and assign the costs to those who cause them, which is an essential principle of utility ratemaking.”

In recent years, tech companies have aggressively expanded their data center operations, driven by the booming demand for artificial intelligence. Major firms such as OpenAI, Google, Meta, and Amazon have made substantial investments in data centers, with projects like Stargate serving as critical infrastructure for AI development.

While the growth of data centers brings job opportunities and digital advancements, it also raises significant concerns regarding their substantial energy and water consumption. Data centers typically rely on water-intensive cooling systems, which can exacerbate existing water scarcity issues.

For instance, a single 100 megawatt (MW) data center can consume over two million liters of water daily, an amount comparable to the daily usage of approximately 6,500 households. This demand is particularly concerning in regions already facing water shortages, such as parts of Georgia, Texas, Arizona, and Oregon, where it places additional stress on aquifers and municipal water supplies.

The findings of this study underscore the urgent need for a balanced approach to energy policy and infrastructure development, ensuring that the growing demands of data centers do not come at the expense of environmental sustainability and public health, according to The Union of Concerned Scientists.

U.S. Supports India-Singapore Submarine Cable Project for Enhanced Connectivity

The U.S. Trade and Development Agency has announced support for a submarine cable project linking India and Singapore, aimed at enhancing connectivity and security in Southeast Asia.

WASHINGTON, DC – On January 20, the U.S. Trade and Development Agency (USTDA) announced its backing for a proposed submarine cable system that will connect India with Singapore and key data hubs across Southeast Asia.

The planned cable route is set to link Chennai, India, with Singapore, while additional landing points are under consideration in Malaysia, Thailand, and Indonesia, according to USTDA.

As part of this initiative, USTDA has signed an agreement with SubConnex Malaysia Sdn. Bhd. to fund a feasibility study for the SCNX3 submarine cable system. This project is expected to serve approximately 1.85 billion people by enhancing digital infrastructure in the region.

The feasibility study aims to attract investment for the cable system and expand the capacity necessary for Artificial Intelligence and cloud-based services. USTDA emphasized that this effort will also help ensure the reliability and security of international networks while minimizing exposure to cyber threats and foreign interference.

The agreement was formalized during the Pacific Telecommunications Council 26 conference held in Honolulu, Hawaii.

SubConnex has appointed Florida-based APTelecom LLC to conduct the feasibility study. The study will encompass various aspects, including route design, engineering, financial modeling, commercialization planning, and regulatory analysis.

The SCNX3 submarine cable is designed to address the increasing connectivity challenges faced by India and Southeast Asia. USTDA noted that the rising demand for digital services, coupled with limited route diversity, has rendered existing networks susceptible to outages and security vulnerabilities.

By introducing new and resilient data pathways, the project is anticipated to enhance digital access and support the growth of Artificial Intelligence and cloud services. USTDA stated that the cable will provide a secure and reliable communications infrastructure for governments, businesses, and citizens throughout South and Southeast Asia.

Furthermore, USTDA highlighted that the feasibility study will promote the use of secure cable technology, safeguarding data flows from potential malicious foreign influences. This concern is increasingly relevant as undersea cables facilitate the majority of global internet and data traffic.

According to IANS, the initiative represents a significant step toward improving digital connectivity in the region.

Dialog Aims to Strengthen Ethical Canada-India AI Collaboration

India and Canada strengthen their partnership in artificial intelligence through the ‘India-Canada AI Dialogue 2026,’ focusing on ethical and inclusive AI development.

TORONTO — The Consulate General of India in Toronto recently hosted the ‘India-Canada AI Dialogue 2026,’ highlighting India’s pivotal role in fostering inclusive, responsible, and impactful artificial intelligence (AI). This event underscored the importance of bilateral cooperation for mutual economic and societal benefits.

Organized in collaboration with the University of Waterloo, the Canada India Tech Council, and Zoho Inc., the dialogue attracted over 600 senior leaders. Participants included C-suite executives, policymakers, and researchers from various sectors, including government, industry, academia, and the innovation ecosystem across Canada. The gathering aimed to enhance collaboration in the field of artificial intelligence.

Dinesh K. Patnaik, the High Commissioner of India to Canada, emphasized the significance of the dialogue, stating, “The India-Canada AI Dialogue 2026 reflects our shared vision for shaping the future of artificial intelligence responsibly. As we build momentum toward the India AI Impact Summit 2026 in New Delhi, this engagement highlights how trusted partners like Canada can collaborate with India to drive innovation that is inclusive, ethical, and globally relevant.”

Canadian Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, addressed the attendees, noting, “AI is no longer an abstract or future-facing conversation — it’s shaping how we work, govern, and relate to one another. What makes the India-Canada AI Dialogue so important is that it puts impact, accountability, and human outcomes at the center of the discussion. India and Canada bring different strengths, but a shared responsibility: to make sure this technology serves people, strengthens societies, and delivers real economic value.”

Doug Ford, the Premier of Ontario, also shared his insights on the dialogue’s significance, stating, “India and Canada share a deep and long-standing partnership, one built on robust trade and investment, people-to-people ties, and research partnerships in emerging technologies such as artificial intelligence.”

The dialogue serves as a platform for both nations to explore innovative solutions in AI while ensuring that ethical considerations remain at the forefront of technological advancements. As the world increasingly relies on AI, the collaboration between India and Canada is poised to set a precedent for responsible AI development globally.

According to IANS, the event marks a significant step in enhancing the Canada-India relationship in the tech sector, particularly in artificial intelligence.

Indian-American Anjeneya Dubey Appointed CTO of Imagine Learning

Anjeneya Dubey, an Indian American cloud and AI leader, has been appointed Chief Technology Officer at Imagine Learning to enhance its AI-driven educational solutions.

Anjeneya Dubey, a prominent Indian American leader in cloud and artificial intelligence, has joined Imagine Learning as Chief Technology Officer (CTO). In this role, he will focus on advancing the company’s Curriculum-Informed AI roadmap, which aims to enhance educator-trusted platforms that connect curriculum, insights, and educational impact.

Imagine Learning, based in Tempe, Arizona, is recognized as a leading provider of digital-first K–12 solutions in the United States. Dubey’s appointment is part of the company’s strategy to ensure that instructional rigor, educator trust, and adaptive innovation remain central to every product experience.

With over two decades of global experience in software engineering, AI innovation, and cloud platforms, Dubey brings a wealth of expertise to his new position. Most recently, he served as the Global Head of Platform Engineering at Honeywell, where he led engineering efforts for digital education platforms used across both K–12 and higher education sectors.

Leslie Curtis, Executive Vice President and Chief Administrative Officer of Imagine Learning, expressed enthusiasm about Dubey’s appointment. “As we build the next era of learning technology, we are investing in leadership that understands both the complexity of enterprise-scale systems and the nuance of classroom impact,” she stated. “Anj’s deep background in SaaS products, data and AI platforms, and developer productivity makes him the ideal leader to power our next wave of curriculum-aligned innovation.”

Dubey’s extensive experience includes building Software as a Service (SaaS) platforms and AI-powered delivery pipelines. He has overseen global cloud infrastructure across major platforms such as AWS, Azure, and Google Cloud Platform (GCP), and has led teams of over 400 engineers across five regions. His contributions to the field are further underscored by multiple patents in hybrid and multi-cloud architectures, as well as the design of platforms serving more than 21 million users in both educational and industrial domains.

In his own words, Dubey expressed excitement about joining Imagine Learning at a crucial time. “This role is a chance to shape how AI can responsibly enhance instructional outcomes, deepen personalization, and support the educators who drive student success every day,” he said. “Our goal is to bring meaningful technology to classrooms — not just automation, but intelligence that understands and elevates learning.”

Dubey’s appointment reflects a broader trend within the education industry, which is increasingly seeking executive talent from cloud-native and AI-forward organizations. Imagine Learning’s strategic move underscores its commitment to maintaining its position as a market leader focused on instructional quality and platform intelligence.

As CTO, Dubey will oversee Imagine Learning’s engineering, DevOps, AI/ML, and cloud teams. His initial initiatives will focus on strengthening the company’s curricula data pipeline, accelerating time-to-insight for educators, and enhancing product reliability for over 18 million students across the nation.

Dubey holds a Bachelor of Technology degree in Electronics and Communication from Madan Mohan Malaviya University of Technology in India, as well as an Executive Certificate in Business Administration and Management from the Mendoza College of Business at the University of Notre Dame.

This appointment marks a significant step for Imagine Learning as it continues to innovate and adapt in the rapidly evolving landscape of educational technology, according to a company release.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

The discovery of a massive interstellar object, 3I/ATLAS, has sparked speculation among scientists, including a Harvard physicist, about its potential technological origins.

A recently discovered interstellar object, known as 3I/ATLAS, is raising eyebrows among astronomers due to its unusual characteristics. Harvard physicist Dr. Avi Loeb suggests that the object’s peculiar features may indicate it is more than just a typical comet.

“Maybe the trajectory was designed,” Dr. Loeb, a science professor at Harvard University, told Fox News Digital. “If it had an objective to sort of be on a reconnaissance mission, to either send mini probes to those planets or monitor them… It seems quite anomalous.”

First detected in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile, 3I/ATLAS marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb pointed out that images of the object reveal an unexpected glow appearing in front of it, rather than the typical tail that comets exhibit. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.”

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is unusually bright given its distance from the sun. However, Dr. Loeb emphasizes that its most striking feature is its trajectory.

“If you imagine objects entering the solar system from random directions, just one in 500 of them would be aligned so well with the orbits of the planets,” he noted. The interstellar object, which originates from the center of the Milky Way galaxy, is expected to pass near Mars, Venus, and Jupiter—an event that Dr. Loeb claims is highly improbable to occur by chance.

“It also comes close to each of them, with a probability of one in 20,000,” he added.

According to NASA, 3I/ATLAS will reach its closest point to the sun—approximately 130 million miles away—on October 30.

“If it turns out to be technological, it would obviously have a big impact on the future of humanity,” Dr. Loeb stated. “We have to decide how to respond to that.”

In January, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics mistakenly identified a Tesla Roadster launched into orbit by SpaceX CEO Elon Musk as an asteroid, highlighting the complexities of identifying objects in space.

A spokesperson for NASA did not immediately respond to requests for comment regarding 3I/ATLAS, leaving the scientific community eager for further insights into this intriguing interstellar visitor.

As the object approaches its closest point to the sun, the implications of its unusual characteristics continue to fuel speculation and debate among astronomers and physicists alike, according to Fox News.

Apple Alerts Users to Security Vulnerability in Millions of iPhones

Apple has issued a warning that a significant security flaw affects approximately 800 million iPhones, urging users to update to iOS 26.2 to mitigate critical vulnerabilities in Safari and WebKit.

Apple’s iPhone, the leading smartphone in the United States and widely used globally, is facing a serious security threat. Recent data indicates that a critical vulnerability could potentially impact around half of all iPhone users, leaving hundreds of millions of devices at risk.

Over the past few weeks, Apple has been alerting users to a significant security flaw that affects an estimated 800 million devices. This vulnerability stems from two critical issues identified in WebKit, the underlying engine that powers Safari and other browsers on iOS. According to Apple, these flaws have been exploited in sophisticated attacks targeting specific individuals, enabling malicious websites to execute harmful code on iPhones and iPads. This could allow attackers to gain control of the device, steal passwords, or access sensitive payment information simply by visiting a compromised site.

In response to this threat, Apple quickly released a software update to address the vulnerabilities. However, reports suggest that many users have yet to install the necessary update. Estimates indicate that approximately 50 percent of eligible users have not upgraded from iOS 18 to the latest version, iOS 26.2. This leaves a staggering number of devices vulnerable worldwide. According to data from StatCounter, the situation may be even more dire, with only about 20 percent of users having completed the update so far. As security details become public, the risk of exploitation increases significantly, as attackers are aware of the vulnerabilities to target.

Apple has specified that certain devices are affected by this vulnerability if they remain unupdated. Users are strongly encouraged to check their devices and ensure they have installed the latest software to protect against potential attacks.

There is no simple setting or browsing habit that can mitigate this issue; the vulnerability is embedded deep within the browser engine. Security experts emphasize that the only effective defense is to install the latest software update. Apple has also discontinued offering a security-only update for users who wish to remain on iOS 18. Unless a device cannot support iOS 26, the fix is only available through the latest versions of iOS 26.2 and iPadOS 26.2.

Updating is generally a straightforward process. If automatic updates are enabled, users may already have the fix installed. For those who need to update manually, the following steps are recommended: ensure the device is connected to Wi-Fi and has sufficient battery life or is plugged in for the update process.

While keeping your iPhone updated is crucial, it should not be the sole line of defense against threats. Utilizing strong antivirus software can provide an additional layer of protection by scanning for malicious links, blocking risky websites, and alerting users to suspicious activity before any damage occurs. This is particularly important given that many attacks exploit compromised websites or hidden browser vulnerabilities. Security software can help identify threats that may slip through and offer greater visibility into device activity.

Think of antivirus software as a backup protection measure. Software updates close known vulnerabilities, while robust antivirus tools help guard against emerging threats.

Apple’s use of the term “extremely sophisticated” in describing the threat underscores the seriousness of the situation. This flaw illustrates how even trusted browsers can become pathways for attacks when updates are delayed. Users who rely on their iPhones for banking, shopping, or work should treat this update as urgent.

As the landscape of cybersecurity continues to evolve, users are left to consider how long they typically wait before installing major iPhone updates. Is that delay worth the risk? Feedback and insights can be shared at Cyberguy.com.

For further information on the best antivirus protection options for Windows, Mac, Android, and iOS devices, visit Cyberguy.com.

According to CyberGuy.com, staying informed and proactive about software updates is essential for maintaining device security.

Andreessen Horowitz Invests $3 Billion in AI Infrastructure Development

Venture capital firm Andreessen Horowitz has made a significant investment of $3 billion in artificial intelligence infrastructure, reflecting its confidence in the sector’s long-term growth potential.

Andreessen Horowitz, one of Silicon Valley’s most influential venture capital firms, is making a bold investment in the future of artificial intelligence (AI), but its approach diverges from the trends seen in the industry.

Commonly referred to as a16z, the firm has committed approximately $3 billion to companies focused on developing the software infrastructure that supports AI. This investment highlights both a strong belief in the long-term growth of AI and a cautious stance regarding the inflated valuations that have characterized the industry in recent years.

In 2024, Andreessen Horowitz launched a dedicated AI infrastructure fund with an initial investment of $1.25 billion. This fund specifically targets startups that create essential tools for developers and enterprises, rather than the more glamorous consumer products dominating headlines. In January, the firm announced an additional investment of around $1.7 billion, bringing its total commitment to approximately $3 billion.

The focus of this fund is on what a16z defines as AI infrastructure. This includes systems that assist technical teams in building, securing, and deploying AI technologies. Key areas of investment encompass coding platforms, foundational model technologies, and networking security tools that are integral to the operation of AI systems.

This strategic move reflects a nuanced understanding of the current landscape, often referred to as the AI bubble. While soaring valuations have drawn parallels to previous tech booms, leaders at Andreessen Horowitz assert that the current frenzy obscures significant advancements occurring beneath the surface.

“Some of the most important companies of tomorrow will be infrastructure companies,” stated Raghuram, a managing partner at the firm and former CEO of VMware, in a recent statement.

The firm’s investment strategy is already yielding positive results. Several AI startups backed by Andreessen Horowitz have achieved lucrative exits or formed valuable partnerships. For instance, Stripe announced its acquisition of Metronome, an AI billing platform supported by the fund, for approximately $1 billion. Additionally, major tech corporations such as Salesforce and Meta have acquired other AI services backed by the firm.

One notable success story is Cursor, an AI coding startup whose valuation skyrocketed to about $29.3 billion last year, a remarkable increase from the $400 million valuation at the time of Andreessen Horowitz’s initial investment.

Despite these successes, concerns linger regarding the overall health of the industry. Critics argue that many private valuations are disconnected from sustainable business fundamentals, with some startups being valued as if they are poised to revolutionize entire sectors overnight.

Ben Horowitz, co-founder and general partner of Andreessen Horowitz, acknowledged that it is premature to draw definitive conclusions about the fund’s performance, which is typically assessed over a decade or more. Nevertheless, he described the fund as “one of the best funds, like, I’ve ever seen.”

The investment strategy is supported by a leadership team that brings a diverse perspective to the table. Martin Casado, a former computational physicist and seasoned coder who oversees the infrastructure unit, noted that while private valuations may appear “crazy,” the demand for AI-focused tools and services remains strong.

Industry analysts suggest that even if certain segments of the market experience a slowdown, a focus on foundational software—rather than merely trendy applications—could position Andreessen Horowitz favorably for the long term.

As the tech sector continues to evolve, the implications of this $3 billion investment will be closely monitored. Whether it will prove successful during a potential tech downturn or reshape how companies implement AI remains one of the most anticipated experiments in the industry.

According to The American Bazaar, Andreessen Horowitz’s strategic focus on AI infrastructure positions it uniquely within a rapidly changing technological landscape.

Novartis Appoints Indian-American Gayathri Raghupathy as Executive Director of AI and Process Excellence

Novartis has appointed Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence, where she will leverage AI to enhance processes and focus on patient care.

Leading innovative medicines company Novartis has announced the appointment of Indian American scientist Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence in U.S. Medical.

In her new role, Raghupathy will collaborate with cross-functional teams to harness artificial intelligence, reimagine critical processes, and scale intelligent solutions that prioritize science and patient care, according to a media release.

“Excited to share about joining Novartis,” Raghupathy expressed on LinkedIn. “I will be working with some amazing teams to harness AI, reimagine processes, and scale intelligent solutions that free us to focus on what matters most: science and patients.”

She also reflected on her career journey, stating, “Grateful for the journey from the lab to medical communications to building AI products in a startup environment, and for the incredible partners who helped shape this path. There’s so much to learn and grow into, and I can’t imagine a better place than Novartis, with its deep commitment to innovation and patients.”

Raghupathy describes herself as a “scientist turned AI strategist who loves turning fuzzy challenges into clear AI opportunities.” She emphasizes her focus on creating AI solutions that address real pain points, connecting various domains such as science, data, process, and operations to design scalable solutions.

“I thrive in fast-paced, 0-to-1 environments where experimentation and teamwork drive progress,” she noted. “Always curious, always learning, and excited about the next wave of human-centered AI in healthcare.”

Prior to her role at Novartis, Raghupathy spent over six years at Kognitic, Inc., a startup where she played a pivotal role in shaping the scientific and business strategy behind AI-enabled intelligence solutions. Most recently, she served as Chief Strategy Officer, having previously held positions such as Vice President of Scientific Strategy and Lead of Scientific & Business Strategy. Her work at Kognitic included driving product innovation, enhancing data quality processes, and collaborating with marketing and medical affairs leaders in the pharmaceutical sector to achieve comprehensive outcomes.

Earlier in her career, Raghupathy worked at BGB Group as a Medical Writer, where she supported scientific content development across various initiatives, including congress planning, promotional strategy, competitive intelligence, and digital education. She also created physician-facing materials and training assets for medical and commercial teams.

Raghupathy’s foundational experience includes co-founding CUNY Biotech and GRO-Biotech, community-led initiatives aimed at connecting life-science researchers with the biopharma ecosystem. Her academic background features a PhD in Molecular, Cell, and Developmental Biology from the Graduate Center at the City University of New York, where her research focused on gene regulation relevant to advancements in T-cell gene therapy.

As she embarks on this new chapter at Novartis, Raghupathy is poised to make significant contributions to the integration of AI in healthcare, ultimately enhancing patient outcomes and driving innovation in the medical field.

The information in this article is based on a media release from Novartis.

Fiber Broadband Provider Investigates Data Breach Impacting One Million Users

Brightspeed is investigating a potential security breach that may have exposed sensitive data of over 1 million customers, as hackers claim to have accessed personal and payment information.

Brightspeed, one of the largest fiber broadband providers in the United States, is currently investigating claims of a significant security breach that allegedly involves sensitive data tied to more than 1 million customers. The allegations emerged when a group identifying itself as the Crimson Collective posted messages on Telegram, warning Brightspeed employees to check their emails. The group asserts it has access to over 1 million residential customer records and has threatened to release sample data if the company does not respond.

As of now, Brightspeed has not confirmed any breach. However, the company stated that it is actively investigating what it refers to as a potential cybersecurity event. According to the Crimson Collective, the stolen data includes a wide array of personally identifiable information. If these claims are accurate, the data could pose serious risks for identity theft and fraud for affected customers.

Brightspeed has emphasized its commitment to addressing the situation. In a statement shared with BleepingComputer, the company indicated that it is rigorously monitoring threats and working to understand the circumstances surrounding the alleged breach. Brightspeed also mentioned that it will keep customers, employees, and authorities informed as more details become available.

Despite the ongoing investigation, there has been no public notice on Brightspeed’s website or social media channels confirming any exposure of customer data. Founded in 2022, Brightspeed is a U.S. telecommunications and internet service provider that emerged after Apollo Global Management acquired local exchange assets from Lumen Technologies. Headquartered in Charlotte, North Carolina, the company serves rural and suburban communities across 20 states and has rapidly expanded its fiber footprint, reaching over 2 million homes and businesses with plans to extend to over 5 million locations.

Given Brightspeed’s focus on underserved areas, many customers rely on the company as their primary internet provider, making any potential breach particularly concerning. The Crimson Collective is not new to targeting high-profile entities. In October, the group breached a GitLab instance associated with Red Hat, stealing hundreds of gigabytes of internal development data. This incident later had repercussions, as Nissan confirmed in December that personal data for approximately 21,000 Japanese customers was exposed through the same breach.

More recently, researchers have noted that the Crimson Collective has targeted cloud environments, including Amazon Web Services, by exploiting exposed credentials and creating unauthorized access accounts to escalate privileges. This track record adds weight to the group’s claims, making them difficult to dismiss.

Even though Brightspeed has yet to confirm a breach, the mere existence of these claims raises significant concerns. If customer data has indeed been accessed, it could be exploited for phishing scams, account takeovers, or payment fraud. Cybercriminals often act quickly following breaches, which means customers should remain vigilant even before an official notice is issued.

A spokesperson for Brightspeed stated, “We take the security of our networks and the protection of our customers’ and employees’ information seriously and are rigorous in securing our networks and monitoring threats. We are currently investigating reports of a cybersecurity event. As we learn more, we will keep our customers, employees, stakeholders, and authorities informed.”

While the investigation unfolds, customers are encouraged to take proactive steps to protect themselves. Most data breaches lead to similar downstream risks, including phishing scams, account takeovers, and identity theft. Establishing good security habits now can help safeguard online accounts.

Scammers often exploit breach headlines to create panic. Customers should be cautious with emails, calls, or texts that mention internet account billing problems or service changes. If a message creates a sense of urgency or pressure, it is advisable to pause before responding. Avoid clicking on links or opening attachments related to account notices or payment issues. Instead, open a new browser window and navigate directly to the company’s official website or app.

Utilizing strong antivirus software can provide an additional layer of protection against malicious downloads. This software can also alert users to phishing emails and ransomware scams, helping to keep personal information and digital assets secure.

Changing Brightspeed account passwords and reviewing passwords for other important accounts is also recommended. Users should create strong, unique passwords that are not reused elsewhere. A trusted password manager can assist in generating and storing complex passwords, making account takeovers more difficult.

Customers should also check if their email addresses have been exposed in past breaches. Some password managers include built-in breach scanners that can identify whether email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Personal data can quietly circulate across data broker sites. Employing a data removal service can help limit the amount of personal information available publicly. While no service can guarantee complete removal of data from the internet, these services actively monitor and systematically erase personal information from numerous websites, reducing the risk of scammers targeting individuals.

Brightspeed allows customers to activate account and billing alerts through the My Brightspeed site or app. Users can select which notifications they wish to receive via email or text. These alerts can help detect unusual activity early and enable prompt responses to potential threats.

Regularly checking bank and credit card statements is also advisable. Customers should look for small or unfamiliar charges, as criminals may test stolen data with low-dollar transactions before attempting larger fraud. If sensitive information may have been compromised, placing a fraud alert or credit freeze can provide additional protection, making it more challenging for criminals to open new accounts in a victim’s name.

Brightspeed’s investigation is ongoing, and the company has pledged to share updates as more information becomes available. The situation underscores the increasing value of customer data and the aggressive tactics employed by extortion groups targeting infrastructure providers. For customers, exercising caution remains the best defense, while transparency and prompt action will be crucial for companies if these claims prove to be valid.

For more information on protecting personal data and staying informed about cybersecurity threats, visit CyberGuy.com.

WhatsApp Web Malware Automatically Distributes Banking Trojan to Users

A new malware campaign is exploiting WhatsApp Web to spread Astaroth banking trojan through trusted conversations, posing significant risks to users.

A recent malware campaign is transforming WhatsApp Web into a tool for cybercriminals. Security researchers have identified a banking Trojan linked to Astaroth that spreads automatically through chat messages, complicating efforts to halt the attack once it begins. This campaign, dubbed Boto Cor-de-Rosa, highlights the evolving tactics of cybercriminals who exploit trusted communication platforms.

The attack primarily targets Windows users, utilizing WhatsApp Web as both the delivery mechanism and the means of further spreading the infection. The process begins innocuously with a message from a contact containing what appears to be a harmless ZIP file. The file name is designed to look random and benign, which reduces the likelihood of suspicion.

Upon opening the ZIP file, users unwittingly execute a Visual Basic script disguised as a standard document. If the script is run, it quietly downloads two additional pieces of malware, including the Astaroth banking trojan, which is written in Delphi. Additionally, a Python-based module is installed to control WhatsApp Web, allowing the malware to operate in the background without any obvious warning signs. This self-sustaining infection mechanism makes the campaign particularly dangerous.

What sets this campaign apart is its method of propagation. The Python module scans the victim’s WhatsApp contacts and automatically sends the malicious ZIP file to every conversation. Researchers from Acronis have noted that the malware even tailors its messages based on the time of day, often including friendly greetings to make the communication feel familiar. Messages such as “Here is the requested file. If you have any questions, I’m available!” appear to come from trusted contacts, leading many recipients to open them without hesitation.

The malware is also designed to monitor its own effectiveness in real time. The propagation tool tracks the number of successfully delivered messages, failed attempts, and the overall sending speed. After every 50 messages, it generates progress updates, allowing attackers to measure their success quickly and adapt their strategies as needed.

To evade detection by antivirus software, the initial script is heavily obfuscated. Once executed, it launches PowerShell commands that download additional malware from compromised websites, including a known domain, coffe-estilo.com. The malware installs itself in a folder that mimics a Microsoft Edge cache directory, containing executable files and libraries that comprise the full Astaroth banking payload. This allows the malware to steal credentials, monitor user activity, and potentially access financial accounts.

WhatsApp Web’s popularity stems from its ability to mirror phone conversations on a computer, making it convenient for users to send messages and share files. However, this convenience also introduces significant risks. When users connect their phones to WhatsApp Web by scanning a QR code at web.whatsapp.com, the browser session becomes a trusted extension of their account. This means that if malware gains access to a computer with an active WhatsApp Web session, it can act on behalf of the user, reading messages, accessing contact lists, and sending files that appear legitimate.

This exploitation of WhatsApp Web as a delivery system for malware is particularly concerning. Rather than infiltrating WhatsApp itself, attackers take advantage of an open browser session to spread malicious files automatically. Many users remain unaware of the potential dangers, as WhatsApp Web often feels harmless and is frequently left signed in on shared or public computers. In these scenarios, malware does not require sophisticated methods; it simply needs access to a trusted session.

To mitigate the risks associated with this type of malware, users should adopt several smart habits. First and foremost, never open ZIP files sent through chat unless you have confirmed the sender’s identity. Be cautious of file names that appear random or unfamiliar, and treat messages that create a sense of urgency or familiarity as potential warning signs. If a file arrives unexpectedly, take a moment to verify its authenticity before clicking.

Additionally, users should regularly check active WhatsApp Web sessions and log out of any that are unrecognized. Avoid leaving WhatsApp Web signed in on shared or public computers, and enable two-factor authentication (2FA) within WhatsApp settings. Limiting web access can significantly reduce the potential spread of malware.

Keeping devices updated is also crucial. Installing Windows updates promptly and ensuring that web browsers are fully updated can close many vulnerabilities that attackers exploit. Strong antivirus software is essential for monitoring script abuse and PowerShell activity in real time, providing an additional layer of protection against malware.

Banking malware is often associated with identity theft and financial fraud. To minimize the fallout from such attacks, consider reducing your digital footprint. Data removal services can assist in removing personal information from data broker sites, making it harder for criminals to exploit your details if malware infiltrates your device. While no service can guarantee complete data removal from the internet, these services actively monitor and erase personal information from numerous websites, enhancing your privacy.

Even with robust security measures in place, financial monitoring adds another layer of protection. Identity theft protection services can track suspicious activity related to your credit and personal data, alerting you if your information is being sold on the dark web or used to open unauthorized accounts. Setting up alerts for bank and credit card transactions can help you respond quickly to any irregularities.

Most malware infections occur when users act too quickly. If a message feels suspicious, trust your instincts. Familiar names and friendly language can lower your guard, but they should never replace caution. Taking a moment to verify the authenticity of a message or file can prevent significant damage.

This WhatsApp Web malware campaign serves as a stark reminder that cyberattacks are increasingly sophisticated, often blending seamlessly into everyday conversations. The ease with which this threat can spread from one device to many is alarming. A single click can transform a trusted chat into a vehicle for banking malware and identity theft. Fortunately, simple changes in behavior, such as being vigilant about attachments, securing WhatsApp Web access, keeping devices updated, and exercising caution before clicking, can significantly reduce the risk of falling victim to such attacks.

As messaging platforms continue to play a larger role in our daily lives, maintaining awareness and adopting simple security habits is essential. Do you believe messaging apps are doing enough to protect users from malware that spreads through trusted conversations? Share your thoughts with us.

According to Source Name.

India’s Vision for AI Discussed at Washington Embassy Meeting

India’s Deputy Chief of Mission in Washington outlined the nation’s vision for artificial intelligence at a recent event, emphasizing the upcoming AI Impact Summit’s focus on practical outcomes for people, the planet, and progress.

WASHINGTON, DC — India is set to host the AI Impact Summit in New Delhi, which will revolve around three core themes: people, planet, and progress. The summit aims to transition global discussions on artificial intelligence from theoretical principles to actionable outcomes, according to Namgya Khampa, India’s Deputy Chief of Mission in Washington.

Khampa made these remarks during the “US-India Strategic Cooperation on AI” discussion, organized by the Observer Research Foundation America, the Special Competitive Studies Project, and the Embassy of India. The event, held at the US Capitol, convened policymakers and experts to outline shared priorities ahead of the summit.

She emphasized that artificial intelligence has evolved from a niche technology into a fundamental component that shapes economic competitiveness, geopolitical power, and societal outcomes.

India’s approach to AI is deeply rooted in its experience with digital public infrastructure. Khampa highlighted how inclusive, interoperable, and cost-effective technology has the potential to transform governance on a large scale. She pointed to platforms like Aadhaar and the Unified Payments Interface, which have significantly expanded access to public services, finance, and identity for over 1.4 billion Indians.

Khampa described AI as a “force multiplier” that enhances existing digital public infrastructure, making systems smarter, more responsive, productive, and accessible. This perspective aims to shift AI from being an abstract concept to a practical tool that drives transformation in everyday life.

The AI Impact Summit is notable for being the first major global AI summit hosted by a country from the Global South. Khampa stated that the summit seeks to address imbalances in global AI governance by promoting broader participation and ownership, rather than compromising on standards.

She elaborated on the summit’s framework, reiterating the themes of people, planet, and progress, which reflect India’s vision of “AI for all.” According to Khampa, AI should empower individuals rather than marginalize them, be resource-efficient, align with sustainability goals, and foster equitable economic growth, particularly in sectors like healthcare, education, agriculture, and public service delivery.

In light of increasing geopolitical tensions and the weaponization of technology supply chains, Khampa noted that technological resilience has become a central aspect of national strategy. She highlighted the India-US trust initiative as a means to transition cooperation from conceptual discussions to concrete projects across research, standards, skill development, and next-generation technologies.

India’s linguistic diversity and its population-scale digital platforms provide a unique environment for developing inclusive, multilingual AI systems. Meanwhile, the United States contributes cutting-edge research, capital, and advanced use cases that can be tested in India and scaled globally.

As the AI Impact Summit approaches, it is clear that India is positioning itself as a leader in the global dialogue on artificial intelligence, advocating for a vision that prioritizes inclusivity, sustainability, and practical benefits for all.

According to IANS, the summit is expected to set a precedent for future discussions on AI governance and cooperation.

OpenAI Introduces Advertising Features to ChatGPT Platform

OpenAI is set to introduce advertising in ChatGPT for U.S. users on its free and Go-tier plans, marking a significant shift in its revenue strategy.

OpenAI is preparing to test advertisements within ChatGPT, targeting users of its free version and the newly launched Go-tier plan in the United States. This initiative aims to alleviate the financial pressures associated with developing and maintaining advanced artificial intelligence systems.

The company announced on Friday that the ads will begin appearing in the coming weeks, clearly distinguished from the AI-generated responses that users receive. Users subscribed to OpenAI’s higher-tier plans—Plus, Pro, Business, and Enterprise—will not encounter these advertisements.

OpenAI emphasized that the introduction of ads will not affect the quality or integrity of ChatGPT’s responses. Furthermore, user conversations will remain confidential and will not be shared with advertisers.

This move represents a significant shift for OpenAI, which has primarily relied on subscription revenue up to this point. It also highlights the increasing financial challenges the company faces as it invests billions in data centers and prepares for a highly anticipated initial public offering.

Despite currently operating at a loss, OpenAI has projected that it will spend over $1 trillion on AI infrastructure by 2030. However, the company has yet to disclose a detailed plan for funding this extensive expansion.

Industry analysts suggest that advertising could become a vital new revenue stream for ChatGPT, which currently boasts approximately 800 million weekly active users. Nevertheless, they caution that this strategy carries inherent risks, including the potential to alienate users and diminish trust if the ads are perceived as intrusive or poorly integrated.

“If ads come off as clumsy or opportunistic, people won’t hesitate to jump ship,” warned Jeremy Goldman, an analyst at Emarketer. He noted that alternatives like Google’s Gemini or Anthropic’s Claude are readily available to users seeking ad-free experiences.

Goldman also indicated that OpenAI’s decision to incorporate ads could have broader implications for the industry, compelling competitors to clarify their own monetization strategies, particularly those that promote themselves as “ad-free by design.”

OpenAI has assured users that advertisements will not be displayed to individuals under the age of 18 and that sensitive topics, such as health and politics, will be excluded from advertising content.

According to the company, ads will be tested at the bottom of ChatGPT responses when relevant sponsored products or services align with the ongoing conversation. This approach aims to ensure that advertisements are contextually appropriate and minimally disruptive.

Advertisers are increasingly optimistic about AI’s potential to enhance results across search and social media platforms, believing that more sophisticated recommendation systems will lead to more effective and targeted advertising.

Additionally, OpenAI confirmed that its ChatGPT Go plan, initially launched in India, will soon be available in the U.S. at a monthly subscription price of $8.

This new advertising initiative marks a pivotal moment for OpenAI as it seeks to balance user experience with the need for sustainable revenue growth, navigating the challenges of an evolving digital landscape.

For more details, refer to American Bazaar.

Humans in the Loop: Tribal Wisdom and AI Bias Challenges

Independent film ‘Humans in the Loop’ explores the intersection of tribal wisdom and artificial intelligence, highlighting the importance of human input in technology.

Independent films often struggle to find their footing in the vast landscape of mainstream cinema. However, Humans in the Loop (2024), now streaming on Netflix, has carved out a niche for itself, thanks in part to the involvement of executive producer Kiran Rao. The film draws inspiration from a 2022 article by journalist Karishma Mehrotra in FiftyTwo, titled “Human Touch.” It follows the story of Nehma, an Adivasi woman from the Oraon tribe in Jharkhand, who returns to her ancestral village after a broken relationship and faces the challenge of supporting her children.

To make ends meet, Nehma takes a job as a data labeller at an AI data center, where she assigns labels to images and videos to help train AI systems. As she immerses herself in this work, she begins to recognize that the categories she is asked to define and the systems she is contributing to may harbor biases that are disconnected from her cultural understanding of nature, community, and labor.

One of the film’s emotional cores lies in the relationship between Nehma and her daughter, Dhaanu. While Dhaanu is drawn toward the urban world, Nehma feels a strong pull back to her land and traditions. Yet, she is also compelled to embrace this new mode of work. The film captures this dynamic beautifully, avoiding forced sentimentality.

Watching Humans in the Loop evokes a sense of quiet tension, navigating the complexities of place and displacement, tradition and technology, caregiving and coded labor. Viewers find themselves rooting for Nehma not only as a mother striving to support her children but also as a subtle force challenging conventional notions of progress.

The film employs contrasting spaces to enhance its narrative: the lush, vibrant village juxtaposed with the sterile, screen-filled environment of the data lab. These visual contrasts underscore the film’s exploration of loops—nature versus technology, labor versus identity, home versus exile. The sound design is particularly evocative, intertwining the natural sounds of the forest with the digital hum of the lab, creating a soulful auditory backdrop.

In addressing the theme of AI’s potential to enhance tribal lives, the film does not take an anti-AI stance. Instead, it posits that when AI systems integrate the labor, perspectives, and knowledge of tribal communities, they can become tools of recognition and empowerment. Nehma’s insistence on shaping the labels and incorporating her lived ecological knowledge into the system illustrates that technology can serve as a site of agency rather than mere extraction.

This hopeful loop suggests that humans can train machines, and in turn, the outputs of these machines can reflect that training. Nehma’s journey emphasizes that individuals can learn not only to survive but also to assert their knowledge. When approached ethically and collaboratively, AI can become part of a cycle of continuity, serving not as a break from tradition but as a tool to sustain and evolve it.

Titled after the human-in-the-loop (HITL) approach, which actively integrates human input and expertise into machine learning and AI systems, Humans in the Loop stands as a quietly significant film. Director Aranya Sahay has crafted a narrative that speaks to the age of AI while honoring the human experience—the laborer, the mother, the land. As discussions surrounding AI and equity continue to grow, this film is poised to resonate even more deeply over time, according to India Currents.

GTA 6 Online Mode Details Leaked in Court Documents

New details about GTA 6’s online mode have emerged from court documents, suggesting the game may feature 32-player lobbies ahead of its anticipated release on November 19, 2026.

New insights into the online mode of Grand Theft Auto VI (GTA 6) have surfaced from court documents related to a legal dispute involving Rockstar Games and its former employees. This information, which has not been officially confirmed by Rockstar, offers a glimpse into the multiplayer component of the highly anticipated game, set to be released on November 19, 2026, for PlayStation 5 and Xbox Series X/S.

Rockstar has maintained a tight lid on the details surrounding GTA 6’s multiplayer features. However, recent revelations from a tribunal in the UK indicate that the game may support up to 32 players in a single session, mirroring the current setup in GTA Online.

The details emerged during a legal hearing concerning the termination of over 30 developers at Rockstar, which is tied to allegations of leaking confidential information on a private Discord channel associated with the Independent Workers’ Union of Great Britain (IWGB). During the proceedings, Rockstar disclosed that certain internal messages discussed game features deemed “top secret.” Among these was a reference to a “large session” involving 32 players, which many have interpreted as a significant hint regarding the online mode.

According to the court documents, the leaked information stemmed from internal Discord messages where a former employee noted that Rockstar faced challenges in organizing playtests due to the need for 32-player sessions. Another developer questioned the difficulty of arranging such sessions, suggesting that multiple studios with quality assurance testers should be able to manage it.

While Rockstar has yet to officially confirm any multiplayer features for GTA 6, the leak aligns with the existing 32-player limit in GTA Online, providing one of the clearest indications of the online ambitions for the upcoming title.

Fans of the franchise have high expectations for GTA 6 Online, particularly given the success of GTA Online, which set a high standard for open-world multiplayer experiences. Many anticipate that the new installment will introduce innovative mechanics, expansive maps, fresh missions, and enhanced social features. Currently, the only confirmed detail is the proposed 32-player limit for at least one type of online session.

In the midst of these developments, Rockstar has defended its decision to terminate the employees, asserting that the dismissals were due to the leaking of confidential information rather than any union-related activities. The company claims that sharing sensitive game details violated internal policies. Conversely, the IWGB and the dismissed developers contend that the firings were unjust and linked to union activism.

A recent ruling by a UK judge determined that Rockstar is not obligated to provide interim back pay to the terminated staff, which supports the studio’s position regarding confidentiality breaches.

The significance of the 32-player detail lies in its origin; it comes from official court documents rather than speculative leaks. While this number may seem modest compared to earlier rumors of larger player limits, it suggests that Rockstar may be adopting a familiar multiplayer structure as a foundation for GTA 6.

It remains uncertain whether the online mode will launch with additional player limits or game modes that could accommodate more than 32 players. Rockstar has not publicly commented on these possibilities. For now, this insight derived from court proceedings offers fans their first credible look at the multiplayer potential of GTA 6 as the release date approaches.

As anticipation builds, Rockstar has officially confirmed that GTA 6 will be available on November 19, 2026, for PS5 and Xbox Series X/S, with expectations for additional platform releases to follow. Fans are eagerly awaiting what is poised to be one of the most significant gaming releases in recent years, according to The Sunday Guardian.

Taiwan Plans $250 Billion Investment in U.S. Semiconductor Manufacturing

Taiwan has committed to investing $250 billion in U.S. semiconductor manufacturing, aiming to enhance domestic production capabilities and reduce reliance on foreign supply chains.

The U.S. Department of Commerce announced on Thursday that Taiwan will invest $250 billion to bolster semiconductor manufacturing in the United States. This significant deal, signed during the Trump administration, aims to enhance domestic production capabilities in a sector critical to both the economy and national security.

Under the agreement, Taiwanese semiconductor and technology companies will make direct investments in the U.S. semiconductor industry. These investments are expected to cover a range of areas, including semiconductors, energy, and artificial intelligence (AI) production and innovation. Currently, Taiwan is responsible for producing more than half of the world’s semiconductors, highlighting its pivotal role in the global supply chain.

In addition to the direct investments, Taiwan will provide $250 billion in credit guarantees to facilitate further investments from its semiconductor and tech enterprises. However, the timeline for these investments remains unspecified.

In exchange for Taiwan’s substantial investment, the United States plans to invest in various sectors within Taiwan, including semiconductor manufacturing, defense, AI, telecommunications, and biotechnology. The specific amount of this reciprocal investment has not been disclosed.

This announcement follows a proclamation from the Trump administration that reiterated the U.S. goal of increasing domestic semiconductor manufacturing. The proclamation emphasized that reliance on foreign supply chains poses significant economic and national security risks. “Given the foundational role that semiconductors play in the modern economy and national defense, a disruption of import-reliant supply chains could strain the United States’ industrial and military capabilities,” it stated.

Additionally, the proclamation introduced a 25% tariff on certain advanced AI chips and indicated that further tariffs on semiconductors would be considered once trade negotiations with other countries, including the deal with Taiwan, are finalized.

In 2025, semiconductor manufacturing has become a focal point of Trump’s economic agenda, with efforts aimed at reducing U.S. dependence on foreign chip production. The administration has proposed aggressive trade measures, including a potential 100% tariff on imported semiconductors, although companies that commit to establishing manufacturing facilities in the U.S. may be eligible for exemptions.

Last year, Taiwan Semiconductor Manufacturing Company (TSMC) announced plans to invest $100 billion to enhance chip manufacturing capabilities in the United States, further underscoring the importance of this sector.

Semiconductors are essential components of modern technology, powering a wide array of devices, from smartphones and automobiles to telecommunications equipment and military systems. The U.S. share of global wafer fabrication has significantly declined, dropping from 37% in 1990 to less than 10% in 2024. This shift has largely been attributed to foreign industrial policies that favor production in East Asia.

As the U.S. seeks to reclaim its position in the semiconductor industry, the partnership with Taiwan represents a critical step towards enhancing domestic manufacturing capabilities and securing supply chains.

This initiative reflects a broader strategy to strengthen the U.S. economy and safeguard national interests in an increasingly competitive global landscape, according to The American Bazaar.

RCB Introduces AI Solution for Crowd Management at Chinnaswamy Stadium

RCB is investing Rs 4.5 crore in an AI-enabled project to enhance crowd management and safety at M. Chinnaswamy Stadium during IPL 2026.

Royal Challengers Bangalore (RCB) is taking a significant step towards improving the matchday experience at M. Chinnaswamy Stadium by investing Rs 4.5 crore in an innovative project aimed at crowd management and safety.

In partnership with Staqu, a technology firm specializing in artificial intelligence, RCB plans to implement advanced facial recognition and intelligent monitoring systems. This initiative is designed to enhance public safety and ensure a seamless experience for fans attending matches.

The deployment of these technologies is expected to address crowd-related issues that have been a concern in previous seasons. By utilizing AI, RCB aims to streamline entry processes and monitor crowd behavior effectively, thereby reducing the likelihood of incidents and improving overall security.

As the Indian Premier League (IPL) continues to grow in popularity, the need for enhanced safety measures has become increasingly important. RCB’s proactive approach reflects a commitment to not only provide an enjoyable atmosphere for fans but also to prioritize their safety during events.

With the introduction of this AI-enabled solution, RCB hopes to set a new standard for crowd management in sports venues across India. The project signifies a forward-thinking approach to leveraging technology in enhancing the spectator experience.

According to NDTV, the collaboration with Staqu marks a significant investment in the future of sports management, showcasing RCB’s dedication to innovation and fan engagement.

Can Autonomous Trucks Enhance Highway Safety and Reduce Accidents?

Kodiak AI’s autonomous trucks have successfully driven over 3 million miles, demonstrating the potential for self-driving technology to enhance highway safety in real-world conditions.

Kodiak AI, a prominent player in the field of AI-powered autonomous driving technology, has been quietly proving the viability of self-driving trucks on actual highways. The company’s flagship system, known as the Kodiak Driver, integrates advanced software with modular, vehicle-agnostic hardware, creating a cohesive platform designed for the complexities of real-world trucking.

As Kodiak AI explains, the Kodiak Driver is not just a theoretical solution; it is built to address the challenges of highways, varying weather conditions, driver fatigue, and the demands of long-haul transportation. This practical approach is essential, as trucking is far from a controlled laboratory environment.

In a recent episode of CyberGuy’s “Beyond Connected” podcast, Kurt spoke with Daniel Goff, vice president of external affairs at Kodiak AI, about the evolving perceptions surrounding autonomous trucks. Goff reflected on the initial skepticism the company faced when it was founded in 2018. “When I first started at the company, I said I worked for a company that was working to build trucks that drive themselves, and people kind of looked at me like I was crazy,” he recalled. However, he noted a significant shift in public sentiment as autonomous vehicles have begun to demonstrate their capabilities beyond mere hype.

One of Kodiak AI’s key arguments is that machines can mitigate many risks associated with human driving. Goff emphasized, “This technology doesn’t get distracted. It doesn’t check its phone. It doesn’t have a bad day to take it out on the road. It doesn’t speed.” In the trucking industry, where safety is paramount, these “boring” characteristics of autonomous vehicles can be advantageous.

Kodiak AI has been actively operating freight routes for several years, rather than solely conducting tests in controlled environments. The company has a command center in Lancaster, Texas, which has facilitated deliveries to cities such as Houston, Oklahoma City, and Atlanta since 2019. During these operations, a safety driver is present to take control if necessary, allowing Kodiak to refine its technology in real-world conditions.

Long-haul trucking is crucial to the U.S. economy, yet it is also one of the most demanding and hazardous professions. Drivers often spend extended periods away from home, working long hours while managing heavy vehicles under various conditions. Goff pointed out that the job’s challenges are compounded by federal regulations that limit driving hours to reduce fatigue. “Driving a truck is one of the most difficult and dangerous jobs that people do in the United States every day,” he said. With a growing number of drivers retiring and fewer individuals entering the profession, the industry is experiencing a significant driver shortage.

Kodiak AI believes that autonomous technology is best suited for the most challenging and repetitive tasks within trucking. Goff explained, “The goal for this technology is really best suited for those really tough jobs—the long lonely highway miles, the trucking in remote locations where people either don’t want to live or can’t easily live.” He also noted that many trucks are idle for a significant portion of the day, with the average truck being driven only about seven hours daily. Autonomous technology could help optimize this by enabling trucks to operate around the clock, only stopping for refueling and safety inspections.

With over 3 million miles driven, Kodiak AI has established a strong safety record, with a safety driver present for most of those miles. Goff highlighted the scale of their operations by comparing it to the average American’s lifetime driving distance of approximately 800,000 miles. “We’re at almost four average lifetimes with our system today,” he stated. The company also utilizes computer simulations and various assessments to evaluate the safety of its system.

In addition to long-haul operations, Kodiak AI collaborates with Atlas Energy Solutions for oil logistics in the Permian Basin. As of the third quarter of 2025, the company has delivered ten driverless trucks to Atlas, which autonomously transport sand around the clock without a human operator in the cab. Goff described this partnership as an ideal environment for testing and refining their long-haul operations.

Kodiak AI has sought third-party validation of its safety claims, including a study with Nauto, a leader in AI-enabled dashcams. The results indicated that Kodiak’s system achieved the highest safety score recorded by Nauto.

Policy and regulation also play a critical role in the adoption of autonomous trucking. Goff noted that 25 states have enacted laws allowing for the deployment of autonomous vehicles. He believes that the inherent dangers of driving make a compelling case for the technology. “People who think about transportation every day understand how dangerous driving a car is, driving a truck is, and just being on the road see the potential for this technology,” he said.

Despite the advancements, concerns about safety remain prevalent among advocates and everyday drivers. Critics question whether autonomous systems can respond adequately in emergencies or handle unpredictable human behavior on the road. Goff acknowledged these concerns, stating, “In this industry in particular, we really understand how important it is to be safe.” He emphasized that trust in autonomous systems must be earned through consistent real-world performance and transparent testing.

For everyday drivers, the prospect of sharing the road with autonomous vehicles can be unsettling, especially given the focus on potential failures in media coverage. However, Kodiak AI argues that the removal of human factors such as fatigue and distraction could lead to safer highways. If the technology continues to perform as claimed, it could result in fewer tired drivers on overnight routes, more reliable freight movement, and ultimately safer roads for all users.

As Kodiak AI continues to move freight and gather safety data on public roads, skepticism remains a vital aspect of the conversation surrounding autonomous trucking. The future of this technology will depend on its ability to demonstrate long-term safety benefits and earn the trust of the public, rather than relying on promises alone. The pressing question is no longer whether self-driving trucks can operate effectively, but whether they can consistently prove to enhance safety for everyone on the road.

For further insights, refer to CyberGuy.

Google Launches Program to Support Indian AI Startups Going Global

Google has launched a new Market Access Program aimed at helping Indian AI startups scale globally, coinciding with the projected growth of India’s AI market to $126 billion by 2030.

With India’s artificial intelligence (AI) market projected to reach $126 billion by 2030, Google has introduced a new Market Access Program designed to assist Indian AI startups in scaling their operations and expanding into global markets.

Announced during the Google AI Startups Conclave in New Delhi, the program aims to support startups from their initial seed stage to full-scale operations. Preeti Lobana, Vice President and Country Manager for Google India, emphasized the importance of this initiative, stating, “If you solve for India, you build for the world. Our focus now is accelerating how quickly Indian startups can scale, reach global markets, and deliver outcomes.”

Lobana noted that India’s AI startup ecosystem is entering a transformative phase, moving from prototypes to market-ready products and transitioning from early traction to sustainable business models. Google’s comprehensive support for startups encompasses capability building, real-world deployment, and scaling, addressing challenges at every critical stage of development.

The Market Access Program is specifically tailored for AI-first startups that are prepared to scale responsibly. It focuses on three key outcomes: enhancing enterprise readiness through global selling expertise, providing access to Google’s extensive enterprise network, and facilitating global immersion in key international markets.

To bolster the capabilities of these startups, Google also announced the upcoming Global AI Hub in Visakhapatnam. This facility, which will be powered by green energy and feature 1-gigawatt computing resources, is designed to equip startups with the high-performance computing necessary to refine their AI models on a global scale.

In addition to the Market Access Program, Google unveiled new updates to its Gemma model family, specifically targeting areas of rapid adoption in India, such as population-scale healthcare AI and action-oriented, on-device agents. The latest iteration, MedGemma 1.5, enhances Google’s health-focused AI initiatives by enabling developers to create applications that support complex medical imaging workflows.

The release of MedGemma 1.5 follows a collaboration between Google and the All India Institute of Medical Sciences (AIIMS), which is utilizing the model to develop India’s Health Foundation Models. This partnership contributes to the country’s Digital Public Infrastructure and enhances health outcomes across the ecosystem.

To support the growing demand for agent-based systems, Google introduced FunctionGemma, a specialized version of the Gemma 3 model. FunctionGemma is designed for function calling, allowing startups to translate natural language commands into executable actions. This capability enables the development of on-device, low-latency applications with automated workflows that prioritize user privacy and can function effectively on low-end devices without a constant internet connection.

Together, these advancements expand the toolkit available to Indian founders, facilitating the transition from experimentation to deployment across healthcare, enterprise, and consumer applications at scale. Lobana highlighted that these models are supported by popular tools throughout the development workflow, including Hugging Face Transformers, Unsloth, Keras, and NVIDIA NeMo.

Alongside the Conclave, Inc42 released the “Bharat AI Startups Report 2026,” which was supported by Google. The report reveals a significant shift in the AI ecosystem, with 47% of enterprises already moving from pilot projects to full production. It also notes a decrease in innovation costs, as historically high computing expenses have hindered Indian startups. With public resources lowering entry barriers, funding is increasingly directed toward product innovation rather than infrastructure costs.

India’s unique challenges, including its 22 languages, inconsistent connectivity, and price sensitivity, have often been viewed as obstacles. However, the report reframes these challenges as assets, suggesting that if an AI solution can effectively serve rural users in India, it is robust enough for global markets. The concept of “Bharat-tested” technology is emerging as a new benchmark for resilience.

The competitive landscape is shifting towards trust-by-design, with startups that prioritize safety, privacy, and security from the outset gaining a significant advantage in securing long-term enterprise contracts.

Ultimately, the success of AI initiatives will be measured by their outcomes. Examples include Cloudphysician, which has reduced ICU mortality rates by 40%, and Rocket Learning, which personalizes education for millions of students. Lobana concluded, “By stitching together skilling, capital, infrastructure, and market access, we are clearing the path for founders. As we look to the AI Impact Summit in February, the signal is clear: The future of AI isn’t just being used in India; it is being built here.”

According to Inc42, the launch of the Market Access Program marks a pivotal moment for Indian AI startups, positioning them to thrive in a rapidly evolving global landscape.

NASA’s Artemis II Mission Marks First Crewed Deep Space Flight in Over 50 Years

NASA is set to launch Artemis II on February 6, marking the return of humans to deep space for the first time in over 50 years with a historic 10-day mission around the Moon.

NASA has announced that it will return humans to deep space next month, targeting a launch date of February 6 for Artemis II. This 10-day crewed mission will carry astronauts around the Moon for the first time in more than half a century.

“We are going — again,” NASA stated in a post on X, confirming that the mission is scheduled to depart no earlier than February 6. The first available launch window will run from January 31 to February 14, with specific launch opportunities on February 6, 7, 8, 10, and 11.

If the launch is delayed, additional windows will open from February 28 to March 13, and from March 27 to April 10. During the February window, opportunities will be available on March 6, 7, 8, 9, and 11, while the April window will offer chances on April 1, 3, 4, 5, and 6.

The mission is set to lift off from Launch Complex 39B at NASA’s Kennedy Space Center in Florida, aboard the Space Launch System (SLS), the most powerful rocket the agency has ever constructed. Preparations are already underway to move the rocket to the launch pad, with the rollout expected to begin no earlier than January 17. This process involves a four-mile journey from the Vehicle Assembly Building to Launch Pad 39B aboard the crawler-transporter 2, which is anticipated to take up to 12 hours.

“We are moving closer to Artemis II, with rollout just around the corner,” said Lori Glaze, acting associate administrator for NASA’s Exploration Systems Development Mission Directorate. “We have important steps remaining on our path to launch, and crew safety will remain our top priority at every turn as we near humanity’s return to the Moon.”

The 322-foot rocket will carry four astronauts beyond Earth’s orbit to test the Orion spacecraft in deep space for the first time with a crew on board. This mission represents a significant milestone following the Apollo era, which last sent humans to the Moon in 1972.

The Artemis II crew includes NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with Canadian Space Agency astronaut Jeremy Hansen. This mission will be notable for being the first lunar mission to include a Canadian astronaut and the first to carry a woman beyond low Earth orbit.

After launch, the astronauts are expected to spend approximately two days near Earth to check Orion’s systems before igniting the spacecraft’s European-built service module to begin their journey toward the Moon.

This maneuver will send the spacecraft on a four-day trip around the far side of the Moon, tracing a figure-eight path that will take the crew more than 230,000 miles from Earth and thousands of miles beyond the lunar surface at its farthest point.

Rather than firing engines to return home, Orion will utilize a fuel-efficient free-return trajectory that leverages the gravitational forces of both Earth and the Moon to guide the spacecraft back to Earth during the roughly four-day return trip.

The mission will conclude with a high-speed reentry and splashdown in the Pacific Ocean off the coast of San Diego, where recovery teams from NASA and the Department of Defense will be on hand to retrieve the crew.

Artemis II follows the uncrewed Artemis I mission and is a crucial test of NASA’s deep-space systems before astronauts attempt a lunar landing on a future flight. NASA emphasizes that this mission is a key step toward long-term lunar exploration and eventual crewed missions to Mars, according to Fox News.

BioMarin Appoints Indian-American Arpit Davé as Chief Digital Officer

BioMarin Pharmaceutical Inc. has appointed Arpit Davé as its new Chief Digital and Information Officer, tasked with enhancing the company’s technology strategy and digital transformation efforts.

BioMarin Pharmaceutical Inc., a prominent global biotechnology firm specializing in rare diseases, has announced the appointment of Arpit Davé as Executive Vice President and Chief Digital and Information Officer. This newly created position underscores the company’s commitment to advancing its enterprise technology strategy.

In his role, Davé will focus on reimagining and executing BioMarin’s technology initiatives, data science, and digital transformation efforts. His leadership is expected to create significant value for patients, employees, and shareholders, as stated by the San Rafael, California-based company.

With over 20 years of experience in information technology and artificial intelligence (AI) within the biopharmaceutical sector, Davé is recognized as a proven leader. His career has been marked by a strong track record of driving patient-centered organizations toward measurable business growth and profitability.

Before joining BioMarin, Davé served as a technology executive at Amgen, Inc. for the past seven and a half years. In his most recent roles there, he led teams focused on digital transformation through AI and innovative digital technologies, positioning the company to remain competitive in an evolving landscape.

Davé’s previous experience includes leadership roles at Bristol Myers Squibb and Merck, where he concentrated on CIO leadership, data science, and research and development.

He holds a Master of Science in Industrial Engineering from the University of Texas and a Bachelor of Science in Mechanical Engineering from Sardar Patel University in Gujarat, India.

“Arpit is a visionary thinker and talented leader who brings to this role a deep understanding of the biopharmaceutical industry and a track record of using technology and AI to deliver for patients and the business,” said Alexander Hardy, President and Chief Executive Officer of BioMarin.

Hardy emphasized that Davé will be responsible for building a strategic vision and roadmap, deploying technologies that will enhance and differentiate BioMarin’s operations across various sectors, including research and development, manufacturing, and commercial organizations.

Expressing his enthusiasm for the new role, Davé stated, “I have long admired BioMarin’s dedication to people living with rare diseases, and I am excited to work as part of this team to create undeniable value for patients, employees, and shareholders.”

He further added, “I am honored to join BioMarin at this pivotal moment where the convergence of biology, data, and AI offers unprecedented potential; my focus will be on empowering our world-class teams and driving innovation to translate these capabilities into faster insights and the accelerated delivery of life-changing therapies to the patients who depend on us.”

Founded in 1997, BioMarin has established a strong reputation for innovation, boasting eight commercial therapies and a robust clinical and preclinical pipeline, according to the company’s release.

This strategic appointment reflects BioMarin’s ongoing commitment to leveraging technology and data to enhance its operations and improve patient outcomes.

According to The American Bazaar, Davé’s leadership is expected to play a crucial role in the company’s future endeavors.

CloudSEK Receives $10 Million Investment from Connecticut Innovations

CloudSEK, an Indian cybersecurity firm, has secured a $10 million investment from Connecticut Innovations, marking a significant milestone as the first Indian-origin cybersecurity company to receive funding from a U.S. state fund.

CloudSEK, a Bengaluru-based cybersecurity firm specializing in predictive cyber threat intelligence, has announced a strategic investment of $10 million from Connecticut Innovations (CI), the venture capital arm of the State of Connecticut. This funding is part of the company’s Series B2 round and positions CloudSEK as the first Indian-origin cybersecurity company to receive backing from a U.S. state-backed venture fund.

The investment follows CloudSEK’s previous fundraising efforts, which included $19 million raised in its Series B1 round. With the completion of this latest tranche, the company has successfully concluded its Series B funding round, which consists of both primary and secondary capital.

“Becoming the first Indian-origin cybersecurity company to receive backing from a U.S. state fund is a milestone for CloudSEK, as well as for the entire Indian cybersecurity ecosystem,” said Rahul Sasi, co-founder and CEO of CloudSEK. He emphasized the significance of this achievement for the company and the broader Indian cybersecurity landscape.

With Connecticut serving as its U.S. anchor, CloudSEK is committed to job creation, localized research investment, and enhancing cyber resilience in the Western world. Sasi expressed pride in advancing the company’s identity as a truly Indo-American cybersecurity firm.

The partnership with CI was established after CloudSEK distinguished itself as a leading startup at VentureClash, CI’s global investment pitch competition. “At our 2025 VentureClash India pitch event, CloudSEK distinguished itself as a truly innovative provider of cybersecurity and predictive threat capabilities used by hundreds of businesses around the world,” stated Alison Malloy, Managing Director of Investments at Connecticut Innovations.

CloudSEK plans to utilize this investment to accelerate its expansion in the U.S., with intentions to establish a regional hub for operations, talent, and partnerships in Connecticut. The company aims to onboard strategic local talent and forge collaborations with corporate partners, universities, and research institutions throughout the state.

The funding from CI will enable CloudSEK to recruit top-tier cybersecurity and AI talent from the region, establish partnerships with local academic and research institutions, build its U.S. headquarters in Connecticut, and drive region-specific cybersecurity research and innovation.

This landmark investment not only enhances CloudSEK’s global trajectory but also symbolizes the growing prominence of Indian cybersecurity innovation on the world stage. By solidifying its presence in Connecticut and continuing to expand globally, CloudSEK is well-positioned to bolster cyber resilience across continents and redefine cross-border technology collaboration.

Prior to this investment round, CloudSEK’s Series B1 was led by U.S.-based strategic investor Commvault, with participation from MassMutual Ventures, Inflexor Ventures, Prana Ventures, and Tenacity Ventures. Early investors, including the Meeran Family (Eastern Group), StartupXSeed, Neon Fund, and Exfinity Ventures, continue to support the company’s long-term growth.

In addition to this funding, CloudSEK recently announced a strategic partnership with Seed Group, a company of The Private Office of Sheikh Saeed bin Ahmed Al Maktoum, aimed at delivering predictive cyber intelligence and AI-attack detection capabilities to organizations across the UAE.

Founded in 2015 by Sasi, a cybersecurity researcher-turned-entrepreneur, CloudSEK has evolved from a research-first initiative into a leading cyber threat intelligence platform, serving over 300 enterprises across various sectors, including banking, financial services, insurance (BFSI), healthcare, technology, and government.

This investment marks a pivotal moment for CloudSEK and highlights the increasing collaboration between Indian tech firms and U.S. state-backed initiatives, paving the way for future innovations in the cybersecurity domain, according to The American Bazaar.

Robots Designed to Feel Pain Show Faster Reactions Than Humans

Scientists have developed a neuromorphic robotic e-skin that enables robots to detect harmful contact and react faster than humans, enhancing safety and interaction in various environments.

Touch something hot, and your hand instinctively pulls back before your brain even registers the pain. This rapid response is crucial in preventing injury. In humans, sensory nerves send immediate signals to the spinal cord, which triggers muscle reflexes. However, most robots currently lack this quick reaction capability. When a humanoid robot encounters something harmful, sensor data typically travels to a central processor, where it is analyzed before instructions are sent back to the motors. This delay can lead to broken parts or dangerous situations, particularly as robots become more integrated into homes, hospitals, and workplaces.

To address this challenge, scientists at the Chinese Academy of Sciences, along with collaborating universities, have developed a neuromorphic robotic e-skin, or NRE-skin. Unlike traditional robotic skins that merely detect touch, this innovative e-skin mimics the human nervous system, allowing robots to sense both contact and potential harm.

The e-skin consists of four layers that replicate the structure and function of human skin and nerves. The outermost layer serves as a protective covering, akin to the epidermis. Beneath this layer, sensors and circuits function like sensory nerves, continuously sending small electrical pulses to the robot every 75 to 150 seconds to confirm that everything is functioning normally. If the skin is damaged, this pulse ceases, alerting the robot to the injury’s location.

When the e-skin experiences normal contact, it sends neural-like spikes to the robot’s central processor for interpretation. However, if the pressure exceeds a predetermined threshold, the skin generates a high-voltage spike that bypasses the central processor and goes directly to the motors. This allows the robot to react instantly, pulling its arm away in a reflexive manner, similar to a human’s response to touching a hot surface. The pain signal is only activated when the contact is genuinely dangerous, preventing unnecessary overreactions.

This local reflex system not only reduces the risk of damage but also enhances safety and makes interactions with robots feel more natural. The e-skin’s design incorporates modular magnetic patches that can be easily replaced. If a section of the skin is damaged, an owner can simply remove the affected patch and snap in a new one, eliminating the need to replace the entire surface. This modular approach saves time, reduces costs, and extends the operational lifespan of robots.

As service robots increasingly work in close proximity to people, such as assisting patients or helping older adults, the ability to sense touch, pain, and injury becomes vital. This heightened awareness fosters trust and minimizes the risk of accidents caused by delayed reactions or sensor overload. The research team emphasizes that their neural-inspired design significantly improves robotic touch, safety, and intuitive human-robot interaction, marking a crucial step toward creating robots that behave more like responsive partners rather than mere machines.

The next challenge for researchers is to enhance the e-skin’s sensitivity, enabling it to recognize multiple simultaneous touches without confusion. If successful, this advancement could allow robots to perform complex physical tasks while remaining vigilant to potential dangers across their entire surface, bringing humanoid robots closer to instinctual behavior.

While the idea of robots that can feel pain may initially seem unsettling, it ultimately serves the purpose of protection, speed, and safety. By emulating the human nervous system, scientists are equipping robots with faster reflexes and improved judgment in the physical world. As robots become more integrated into daily life, these instinctual capabilities could prove to be transformative.

Would you feel more at ease around a robot capable of sensing pain and reacting instantly, or does this concept raise new concerns for you? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the development of this technology represents a significant leap forward in robotic capabilities.

Walmart Appoints Indian-American Shishir Mehrotra to Company Board

Walmart has appointed Shishir Mehrotra, CEO of Superhuman, to its Board of Directors as the retail giant prepares for an agentic AI future.

Walmart Inc. has announced the appointment of Shishir Mehrotra, an Indian American technology veteran and current CEO of Superhuman, to its Board of Directors. This move comes as the retail giant positions itself for an agentic AI future.

Mehrotra will contribute to both the Compensation and Management Development Committee and the Technology and eCommerce Committee, as stated by the Bentonville, Arkansas-based company.

Greg Penner, chairman of Walmart’s Board of Directors, expressed enthusiasm about Mehrotra’s addition, saying, “Our focus remains on serving customers through a people-led, tech-powered approach. Shishir’s background adds to our boardroom the insight of a proven builder, offering a distinguished track record scaling platforms relied upon by millions.”

Randall Stephenson, the lead independent director, echoed this sentiment, highlighting Mehrotra’s unique skill set. “Shishir brings a rare combination of technical depth and product leadership. He has helped create and scale platforms that unlock creativity and productivity for people and teams at global scale. We’re excited to welcome him to our Board,” he remarked.

In response to his appointment, Mehrotra stated, “I have long admired Walmart’s ability to innovate while staying true to its core values, and joining the Board as the company builds for an agentic AI future is a rare opportunity. This era is the most significant technological shift I’ve seen in my career, and I look forward to working with the team to shape the future for the millions of people Walmart serves.”

Mehrotra brings over 25 years of experience in the technology sector, with a proven track record of building category-defining platforms. Before his role at Superhuman, an email application designed for productivity enhancement, he co-founded Coda, a productivity and AI platform that successfully served millions of users and tens of thousands of teams.

Prior to founding Coda, Mehrotra held significant positions at YouTube, serving as both Chief Product Officer and Chief Technology Officer. During his tenure, he played a crucial role in transforming YouTube into the world’s largest video platform and one of Google’s most significant and rapidly growing businesses, catering to a new generation of creators.

Mehrotra holds a dual Bachelor of Science degree in mathematics and computer science from the Massachusetts Institute of Technology.

Walmart serves approximately 270 million customers and members each week across more than 10,750 stores and various eCommerce websites in 19 countries. The company reported a fiscal year 2025 revenue of $681 billion and employs around 2.1 million associates globally, according to the company’s release.

This strategic appointment reflects Walmart’s commitment to integrating advanced technology into its operations and enhancing customer service as it navigates the evolving landscape of retail.

According to The American Bazaar, Mehrotra’s expertise will be invaluable as Walmart continues to innovate and adapt in a rapidly changing market.

Jumio Appoints Indian-American Bala Kumar as President and Interim CEO

Jumio has appointed Bala Kumar as president and interim CEO, focusing on eradicating identity theft while enhancing digital interactions as the company prepares for its next phase of growth.

Jumio, a prominent provider of AI-powered identity intelligence solutions, has announced the appointment of Indian American executive Bala Kumar as its president and interim chief executive officer. This leadership change comes as the company aims to strengthen its position in a rapidly evolving market.

Kumar, who holds a master’s degree in Computer Applications from the National Institute of Technology Karnataka and has completed the Harvard Leadership Direct program, takes over from Robert Prigge. Prigge has led the company for nearly a decade and is departing to pursue new opportunities.

The transition in leadership is described by Jumio as a planned evolution, designed to ensure continuity and effective execution as the company embarks on its next phase of expansion. The firm is focused on maintaining its momentum in the identity verification and biometrics market.

Having joined Jumio in 2021, Kumar previously served as the chief product and technology officer. In this capacity, he successfully expanded Jumio’s offerings from a single product to a comprehensive portfolio of identity intelligence solutions, addressing the evolving needs of customers. He will continue to guide the company’s product vision and innovation.

Ben Cukier, co-chairman of Jumio’s board of directors, expressed confidence in Kumar’s capabilities. “This transition reflects the strength of our leadership bench and the company’s focus on disciplined execution,” Cukier stated. “With deep institutional knowledge and a proven track record of delivering results, Bala is exceptionally well-positioned to lead the company with full authority during this period while we conduct a thoughtful search for a CEO to fuel the next phase of Jumio’s growth.”

Kumar expressed his enthusiasm for his new role, stating, “I am honored to step into this role. We have a strong foundation, a clear strategy, and an incredibly talented team. My focus is on executing our strategy in service of our customers and Jumio’s core mission: eradicating identity theft while enabling trusted, low-friction digital interactions for consumers and businesses both now and in the future.”

The Jumio Platform offers AI-powered identity intelligence that integrates biometric authentication, automation, and data-driven insights. This technology is designed to accurately establish, maintain, and reassert trust throughout the customer journey, from account opening to ongoing monitoring.

Utilizing advanced automated technology, including biometric screening, AI and machine learning, liveness detection, and no-code orchestration with hundreds of data sources, Jumio aims to combat fraud and financial crime. The platform also facilitates faster customer onboarding and ensures compliance with regulatory requirements, including Know Your Customer (KYC) and Anti-Money Laundering (AML) standards.

With a global presence that includes offices in North America, Latin America, Europe, Asia Pacific, and the Middle East, Jumio has processed over one billion transactions across more than 200 countries and territories, encompassing real-time web and mobile transactions.

This strategic appointment of Bala Kumar as president and interim CEO marks a significant step for Jumio as it continues to innovate and lead in the identity verification space, ensuring a secure digital environment for businesses and consumers alike.

According to The American Bazaar, this leadership change positions Jumio for continued growth and success in the identity intelligence sector.

Laurent Simons: The Controversial Journey of a Child Prodigy in Human Enhancement

Laurent Simons, a Belgian prodigy who earned his PhD at 15, is now navigating the contentious field of human enhancement through artificial intelligence and medical science.

A doctoral degree earned at the age of 15 is not, by itself, a scientific breakthrough. It is a personal milestone—rare and extraordinary—often framed as a story of exceptional intellect rather than institutional transformation. However, when such an achievement is followed by an explicit ambition to reshape human biology through artificial intelligence, the narrative shifts from mere curiosity to significant consequence.

This is the case with Laurent Simons, a Belgian prodigy whose academic trajectory has unfolded at an unprecedented pace. Having completed high school by the age of eight, Simons went on to obtain both a bachelor’s and a master’s degree in physics in under two years. In late 2025, at just 15 years old, he formally defended his PhD in theoretical quantum physics at the University of Antwerp—through standard academic channels, under conventional supervision, and without honorary acceleration.

The credentials are verifiable. The thesis exists, the defense was public, and the institution is accredited. Yet Simons’ next move—venturing into medical science and artificial intelligence with the stated aim of “creating superhumans”—has placed him at the edge of some of the most contentious debates in modern science.

Simons’ doctoral dissertation, titled “Bose polarons in superfluids and supersolids,” examined the behavior of impurity particles within Bose–Einstein condensates—states of matter formed when atoms are cooled to near absolute zero, causing quantum effects to emerge on a macroscopic scale.

This area of condensed matter physics has implications for quantum simulation, low-temperature systems, and many-body interactions. According to documentation released by the University of Antwerp, Simons satisfied all academic and research requirements associated with the degree.

As part of his doctoral work, he also completed an internship at the Max Planck Institute for Quantum Optics, contributing to research on quasiparticle interactions in ultracold atomic environments. These institutions have not challenged the legitimacy of his academic record, and while the speed of his progress remains extraordinary, the process itself was conventional.

Immediately following his doctoral defense, Simons relocated to Munich to begin a second PhD program—this time in medical science, with a focus on artificial intelligence. This shift marks a departure from abstract quantum modeling into applied biological and computational research.

In a televised interview with Belgian broadcaster VTM, Simons articulated his long-term ambition in unusually direct terms. “After this, I’ll start working towards my goal: creating superhumans,” he stated.

Earlier reporting by The Brussels Times noted that Simons has discussed defeating aging since the age of 11, framing longevity as both a scientific and moral imperative. While details of his current research remain undisclosed, available information suggests that his work is concentrated on conceptual and computational models rather than laboratory-based biomedical experimentation. Areas of interest reportedly include AI-driven diagnostics, regenerative medicine frameworks, and lifespan modeling.

At this stage, there is no public evidence that Simons is involved in clinical trials or human-subject research.

Simons’ ambitions align with a rapidly expanding research landscape focused on human longevity and biological optimization. Well-funded private ventures such as Altos Labs and Calico Life Sciences are investigating cellular reprogramming, senolytics, and genetic pathways associated with aging and disease resistance.

At the academic level, journals such as Nature Aging and Cell Reports Medicine continue to publish work on machine-learning-based disease detection, gene expression analysis, and tissue regeneration. Yet much of this research remains exploratory, and the practical limits of “enhancement” remain undefined.

What distinguishes Simons is not merely his age, but the unusual bridge he is attempting to cross. Transitions from theoretical quantum physics into applied medical science are rare, particularly at the doctoral level, where disciplinary depth typically outweighs breadth.

The notion of engineering “superhumans” lacks scientific consensus and ethical clarity. According to the Stanford Encyclopedia of Philosophy, debates surrounding human enhancement revolve around whether interventions are therapeutic, elective, or fundamentally transformational.

At present, there is no indication that Simons’ research violates existing ethical frameworks. His academic affiliations have not publicly raised concerns, and his work appears to fall within early-stage theoretical exploration.

Nevertheless, the convergence of artificial intelligence, medicine, and long-term biological redesign presents governance challenges. Questions of supervision, peer review, and interdisciplinary oversight are still being negotiated across the field. The involvement of a researcher below the age of legal adulthood introduces further complexity.

For now, Laurent Simons represents neither a scientific revolution nor a regulatory failure. He is, instead, a data point at the frontier—where exceptional individual capability intersects with emerging technologies whose implications remain unresolved.

Whether his ambitions lead to meaningful breakthroughs or remain aspirational will depend not on speed, but on scrutiny, according to The Brussels Times.

Indian-American Students Develop Health Insurance Decision-Making Tool

Indian American students Sunveer Chugh and Dev Gupta have developed a digital tool, InsuraBridge, to assist consumers in making informed health insurance decisions.

Sunveer Chugh and Dev Gupta, two Indian American undergraduates at Case Western Reserve University in Cleveland, Ohio, have created a digital tool designed to help consumers navigate the complexities of health insurance purchasing on healthcare.gov.

The innovative tool, named InsuraBridge, aims to simplify the process of understanding critical aspects of health insurance plans, such as out-of-pocket maximums and in-network doctors, according to a university press release.

Chugh, a computer science major, and Gupta, who studies quantitative economics and healthcare management, recently showcased their startup at the Consumer Electronics Show (CES) in Las Vegas, one of the largest technology events in the world.

Gupta highlighted the challenge many consumers face, stating, “Millions of people buy insurance through healthcare exchanges, but there can be hundreds of plan options. Even for tech-savvy consumers, it’s nearly impossible to know which one is right for you.”

InsuraBridge employs advanced analytics to evaluate users’ preferences, including cost sensitivity, preferred doctors, and anticipated healthcare needs. The tool then provides tailored plan recommendations based on these assessments. This technology is built on a patented algorithm and utilizes an application programming interface (API) connected to healthcare exchanges.

“Think of it as a digital co-pilot for choosing insurance,” Chugh explained. “We want to give people clarity and confidence in a process that’s usually overwhelming.”

The duo presented their prototype at CES 2026’s University Innovations section, joining hundreds of emerging founders from around the globe.

Gupta emphasized their mission, saying, “Our goal is to make health insurance transparent, thus ensuring access, establishing care, and expanding medicine.” Chugh added, “If we can help people make better choices for their health and finances, that’s a win.”

Looking ahead, InsuraBridge is preparing to launch a new Medicaid application tool. This tool aims to streamline workflows by consolidating patient information and autocompleting applications in just minutes, significantly reducing the time typically required for the process.

Ray Herschman, an adjunct professor at the Weatherhead School of Management, and Mark Votruba, an associate professor at the same institution, have been instrumental in guiding the students throughout the development of their digital tool.

Herschman noted that InsuraBridge exemplifies the university’s commitment to innovation and social impact. “These students saw a problem that affects millions and used technology to fix it,” he said. “The InsuraBridge application connects to the Healthcare.gov website’s API to access key data that powers the healthcare exchange’s health plan options and associated benefit and provider network attributes, empowering consumers to make informed decisions.”

As the healthcare landscape continues to evolve, tools like InsuraBridge may play a crucial role in helping consumers navigate their options and make informed choices about their health insurance.

According to Case Western Reserve University, the development of such innovative solutions reflects a growing trend among students to address real-world challenges through technology.

Why January Is the Ideal Time to Remove Personal Data Online

January is a crucial month for online privacy, as scammers refresh their target lists, making it the ideal time to remove personal data from the internet.

As the new year begins, many people take the opportunity to reset their lives—setting new goals, organizing their spaces, and cleaning out their inboxes. However, it’s not just individuals who are hitting the reset button; scammers are doing the same, particularly when it comes to personal data.

January marks a significant period for online privacy, as data brokers refresh their profiles and scammers rebuild their target lists. This means that the longer your personal information remains online, the more comprehensive and valuable your profile becomes to those looking to exploit it.

To combat this growing threat, institutions such as the U.S. Department of the Treasury have issued advisories urging individuals to remain vigilant and take proactive measures against data-related scams. By acting early in the year, you can significantly reduce the likelihood of falling victim to scams, lower the risk of identity theft, and limit unwanted exposure throughout the year.

Many people mistakenly believe that outdated information becomes irrelevant over time. Unfortunately, this is not the case with data brokers. These entities do not merely store a static snapshot of who you are; they create dynamic profiles that evolve over time, incorporating new data points such as:

Each year adds another layer to your profile—a new address, a changed phone number, or even a family connection. While a single data point may seem insignificant, together they form a detailed identity profile that scammers can use to impersonate you convincingly. Therefore, delaying action only exacerbates the problem.

Scammers do not target individuals randomly; they work from organized lists. At the start of the year, these lists are refreshed, akin to a spring cleaning for criminals who are preparing to exploit identities for the next twelve months. Once your profile is flagged as responsive or profitable, it often remains in circulation.

Removing your data early is not just about preventing immediate scams; it is about disrupting the supply chain that fuels these criminal activities. When your information is eliminated from data broker databases, it has a compounding effect. The fewer lists you appear on in January, the less likely your data will be reused, resold, or recycled throughout the year. This is why it is essential to address data exposure proactively rather than reactively.

January is particularly critical for retirees and families, who are often more susceptible to fraud, scams, and other crimes. Scammers are aware of this and prioritize households with established financial histories early in the year.

Many individuals attempt to start fresh in January by taking various steps, such as:

While these actions are beneficial, they do not eliminate your data from broker databases. Credit monitoring services can alert you after a problem has occurred, password changes do not affect public profiles, and unsubscribing does not prevent data resale. If your personal information remains in numerous databases, scammers can easily locate you.

If you want to minimize scam attempts throughout the year, the most effective strategy is to remove your personal data at the source. You can achieve this in one of two ways: by submitting removal requests yourself or by employing a professional data removal service to handle the process for you.

Manually removing your data involves identifying dozens or even hundreds of data broker websites, locating their opt-out forms, and submitting removal requests one by one. This method requires verifying your identity, tracking responses, and repeating the process whenever your information resurfaces. While effective, it demands considerable time, organization, and ongoing follow-up.

On the other hand, a data removal service can manage this process on your behalf. These services typically:

Given the sensitive nature of personal information, it is crucial to select a data removal service that adheres to strict security standards and employs verified removal methods. While no service can guarantee complete removal of your data from the internet, utilizing a data removal service is a prudent choice. Although these services may come at a cost, they handle the work for you by actively monitoring and systematically erasing your personal information from numerous websites. This approach provides peace of mind and has proven to be the most effective way to safeguard your personal data.

By limiting the information available online, you reduce the risk of scammers cross-referencing data from breaches with information they may find on the dark web, making it more challenging for them to target you.

As January unfolds, it is essential to recognize that scammers do not wait for mistakes; they wait for exposed data. This month is when profiles are refreshed, lists are rebuilt, and targets are selected for the year ahead. The longer your personal information remains online, the more complete—and dangerous—your digital profile becomes.

The good news is that you can break this cycle. Removing your data now can reduce scam attempts, protect your identity, and lead to a quieter, safer year ahead. If you are going to make one privacy move this year, make it early—and make it count.

Have you ever been surprised by how much of your personal information was already online? Share your experiences with us at Cyberguy.com.

For more information on data removal services and to check if your personal information is already available online, visit Cyberguy.com.

According to CyberGuy.com, taking proactive steps in January can significantly enhance your online privacy and security.

Meta Partners with Three Companies for Nuclear Power Initiatives

Meta has entered into 20-year agreements to purchase power from three Vistra nuclear plants and collaborate on small modular reactor projects with two companies.

Meta announced on Friday that it has secured 20-year agreements to purchase power from three nuclear plants operated by Vistra Energy. The company also plans to collaborate with two firms focused on developing small modular reactors (SMRs).

According to Meta, the power purchase agreements will involve Vistra’s Perry and Davis-Besse plants in Ohio, as well as the Beaver Valley plant in Pennsylvania. These agreements are expected to facilitate financial support for the expansion of the Ohio facilities while extending their operational lifespan. The plants are currently licensed to operate until at least 2036, with one of the reactors at Beaver Valley licensed to run through 2047.

In addition to the power agreements, Meta will assist in the development of small modular reactors being planned by Oklo and TerraPower. Proponents of SMRs argue that these reactors could ultimately reduce costs, as they can be manufactured in factories rather than constructed on-site. However, some industry experts remain skeptical about whether SMRs can achieve the same economies of scale as traditional large reactors. Currently, there are no commercial SMRs operating in the United States, and the proposed plants will require regulatory permits before construction can begin.

Joel Kaplan, Meta’s chief global affairs officer, emphasized the significance of these agreements, stating that they, along with a previous agreement with Constellation to maintain an Illinois reactor’s operation for another 20 years, position Meta as one of the largest corporate purchasers of nuclear energy in U.S. history.

Meta’s agreements are projected to provide up to 6.6 gigawatts of nuclear power by 2035. The company will also help fund the development of two reactors by TerraPower, which are expected to generate up to 690 megawatts of power as early as 2032. This partnership grants Meta rights to energy from up to six additional TerraPower reactors by 2035. Chris Levesque, President and CEO of TerraPower, noted that this agreement will facilitate the rapid deployment of new reactors.

The trend of tech companies investing in nuclear energy has been gaining momentum. Last October, both Amazon and Google announced plans to invest in the development of small nuclear reactors, a technology that is still in its nascent stages. These initiatives aim to address the high costs and lengthy construction timelines that have historically hindered new reactor projects in the U.S.

Meta, along with other major tech firms such as Amazon and Google, has signed the Large Energy Consumers Pledge, committing to help triple the nation’s nuclear energy output by 2050. As these companies expand their artificial intelligence centers, they are becoming significant contributors to the increasing energy demands in the United States. Other notable organizations, including Occidental and IHI Corp, have also joined this initiative, indicating widespread corporate support for the nation’s nuclear energy goals.

As the energy landscape continues to evolve, Meta’s strategic investments in nuclear power reflect a growing recognition of the role that nuclear energy can play in meeting future energy needs.

According to The American Bazaar, these developments highlight a broader trend among tech companies to embrace nuclear energy as a sustainable solution to rising energy demands.

Health Tech Innovations Highlighted at CES 2026

Innovations showcased at CES 2026 are transforming health technology, featuring AI-driven devices aimed at enhancing wellness, mobility, and safety.

The Consumer Electronics Show (CES) 2026 is currently taking place in Las Vegas, showcasing the latest advancements in consumer technology. This annual event, which spans four days every January, attracts tech companies, startups, researchers, investors, and journalists from around the globe. CES serves as a preview for products that could soon find their way into homes, hospitals, gyms, and workplaces.

This year, while flashy gadgets and robots capture attention, health technology is at the forefront, with a focus on prevention, recovery, mobility, and long-term well-being. Here are some standout health tech products that have garnered significant interest at CES 2026.

NuraLogix has introduced a groundbreaking smart mirror that transforms a brief selfie video into a comprehensive overview of an individual’s long-term health. The Longevity Mirror uses artificial intelligence to analyze subtle blood flow patterns in the user’s face, providing scores for metabolic health, heart health, and physiological age on a scale from zero to 100. Results are delivered in approximately 30 seconds, accompanied by clear explanations and recommendations. The AI system has been trained on hundreds of thousands of patient records, allowing it to convert raw data into understandable insights. The mirror supports up to six user profiles and is set to launch in early 2026 for $899, which includes a one-year subscription. Subsequent annual subscriptions will cost $99, with optional concierge support available to connect users with nutrition and wellness experts.

Ascentiz showcased its H1 Pro walking exoskeleton, which emphasizes real-world mobility applications. This lightweight, modular device is designed to reduce strain while providing motor-assisted movement over longer distances. The system employs AI to adapt assistance based on the user’s motion and terrain, making it effective on inclines and uneven surfaces. Its compact design features a belt-based attachment system, and its dust- and water-resistant construction allows for outdoor use in various conditions. Ascentiz also offers more powerful models, including Ultra and knee or hip-attached versions, demonstrating the shift of exoskeletons from clinical rehabilitation to everyday mobility support.

Cosmo Robotics received a CES Innovation Award for its Bambini Kids exoskeleton, the first overground pediatric exoskeleton with powered ankle motion. Designed for children aged 2.5 to 7 with congenital or acquired neurological disorders, this system offers both active and passive gait training modes. By encouraging guided and natural movement, it helps children relearn walking skills while minimizing complications associated with conditions like cerebral palsy.

For those who spend significant time indoors, the Sunbooster device offers a practical solution for replacing the benefits of natural sunlight. This innovative product clips onto a monitor, laptop, or tablet, projecting near-infrared light while users work, without causing noise or disruption. Near-infrared light, a natural component of sunlight, is associated with improved energy levels, mood, and skin health. Sunbooster utilizes patented SunLED technology to deliver controlled exposure and tracks daily dosage, encouraging two to four hours of use during screen time. The technology has been validated through human and laboratory studies conducted at the University of Groningen and Maastricht University, providing scientific support for its claims. The company is also developing a phone case and a monitor with built-in near-infrared lighting to further enhance indoor sunlight replacement.

Allergen Alert addresses the challenges of dining out with food allergies. This handheld device tests small food samples inside a sealed, single-use pouch, detecting allergens or gluten in meals within minutes. Built on laboratory-grade technology derived from bioMérieux expertise, the system automates the analytical process, delivering results without requiring technical knowledge. Allergen Alert aims to restore confidence and inclusion at the dining table, with plans for pre-orders at the end of 2026 and future expansions to test additional common allergens.

Samsung previewed its Brain Health feature for Galaxy wearables, a research-driven tool that analyzes walking patterns, voice changes, and sleep data to identify potential early signs of cognitive decline. This system leverages data from devices like the Galaxy Watch and Galaxy Ring to establish a personal baseline, monitoring for subtle deviations linked to early dementia. Samsung emphasizes that Brain Health is not intended to diagnose medical conditions but rather to provide early warnings that encourage users and their families to seek professional evaluations sooner. While a public release date has not been confirmed, CES 2026 attendees can experience an in-person demo of the feature.

Withings is redefining the capabilities of bathroom scales with its BodyScan 2, which has earned a CES 2026 Innovation Award. In less than 90 seconds, this smart scale measures ECG data, arterial stiffness, metabolic efficiency, and hypertension risk. The connected app allows users to observe how factors like stress, sedentary habits, menopause, or weight changes impact their cardiometabolic health, shifting the focus from weight alone to early health indicators that can be tracked over time.

Garmin received a CES Innovation Honoree Award for its Venu 4 smartwatch, which features a new health status indicator that highlights when metrics such as heart rate variability and respiration deviate from personal baselines. The watch also includes lifestyle logging, linking daily habits to sleep and stress outcomes, and boasts up to 12 days of battery life for continuous tracking without nightly charging.

Ring introduced Fire Watch, an opt-in feature that utilizes AI to detect smoke and flames from compatible cameras. During wildfires, users can share snapshots with Watch Duty, a nonprofit organization that distributes real-time fire alerts to communities and authorities, demonstrating how existing home technology can enhance public safety during environmental emergencies.

Finally, the RheoFit A1 may be the most relaxing health gadget at CES 2026. This AI-powered robotic roller glides beneath the user’s body to deliver a full-body massage in about 10 minutes. With interchangeable massage attachments and activity-specific programs, it targets soreness from workouts or long hours spent at a desk. The companion app employs an AI body scan to automatically adjust pressure and focus areas.

CES 2026 highlights the evolution of health technology, making it more practical and personal. Many showcased products prioritize early problem detection, stress reduction, and informed health decision-making. As technology becomes increasingly integrated into daily life, these innovations promise to enhance safety and well-being.

Which of these health tech products from CES 2026 would you find most useful in your daily life? Share your thoughts with us at Cyberguy.com.

According to CyberGuy.com.

AI Workplace Competition: Analyzing Claude, Gemini, ChatGPT, and Others

Recent survey findings reveal that Anthropic’s Claude is the most popular AI tool among U.S. professionals, surpassing competitors like ChatGPT and Google’s Gemini.

In the rapidly evolving landscape of artificial intelligence, a new survey sheds light on the preferences of U.S. professionals regarding workplace AI tools. While major tech companies are eager to promote their proprietary AI solutions, it appears that users are making their choices based on performance rather than corporate allegiance.

Conducted by Blind, an anonymous professional community platform, the survey indicates that Claude, developed by Anthropic, has emerged as the most widely used AI model in corporate environments. Surprisingly, Claude has outperformed more established competitors, including ChatGPT and Google’s Gemini. According to the survey, 31.7% of respondents reported using Claude as their primary AI tool at work, regardless of their employer’s preferences.

The survey collected responses from verified U.S.-based professionals during December, with a significant number identifying as software engineers. Participants sought AI assistance across various tasks, including debugging, system design, documentation, and content generation.

Despite Claude’s leading position, the survey reveals a more complex reality: professionals are not committing to a single AI model. Instead, many are curating personalized toolkits tailored to their specific needs. Vasudha Badri Paul, founder of Avatara AI, shared her experience, stating that her daily workflow involves multiple platforms. “I use Perplexity and Notebook LLM most frequently. For research and learning, I go to Claude and Gemini, while ChatGPT is my go-to for content,” she explained. Paul also incorporates Notion AI for organization, Sora for short video generation, Canva Magic Studio for graphics, and Gamma for slide decks.

This trend reflects a pragmatic approach among users, who are increasingly willing to switch between tools rather than remain loyal to a single ecosystem.

When it comes to coding, Claude’s advantages become particularly pronounced. The survey indicates that among developers, Claude excels in software development tasks. Many respondents highlighted its capabilities in writing and understanding complex code, an area where company-backed tools often face resistance. The survey found that 19.6% of professionals use ChatGPT, while 15% rely on Gemini. GitHub Copilot is close behind with 14.2%, and another 11.5% reported using Cursor.

The survey also explored preferences within companies that have their own AI products. At Meta, for instance, 50.7% of surveyed employees indicated that Claude was their preferred AI model, while only 8.2% reported using Meta AI. A similar trend was observed among Microsoft employees, where 34.8% favored Claude, narrowly ahead of Copilot at 32.2%, with ChatGPT trailing at 18.3%.

One key takeaway from the survey is that corporate backing does not necessarily guarantee employee loyalty. In an era where productivity is increasingly driven by AI tools, professionals are prioritizing effectiveness over brand allegiance.

Nitin Kumar, an app developer and solutions manager, noted the shift in his own AI stack over the past year. He stated, “Claude is definitely the most superior for software development.” Kumar recently canceled his ChatGPT Plus subscription, citing a lack of utility. However, he acknowledged that the AI landscape is still evolving, adding, “Gemini 3 Pro changed the game completely for non-coding uses.” He believes that coding capabilities are now nearly on par with Claude Opus 4.5.

Kumar’s insights reflect a broader trend of users experimenting with different tools and comparing version upgrades to find the best fit for their needs.

Interestingly, Google employees showed the strongest internal alignment, with 57.6% of those surveyed using Gemini as their primary AI model. However, this preference did not extend beyond Google’s offices, as only 11.6% of Amazon employees selected Gemini as their top choice. Amazon’s own AI tools, such as Amazon CodeWhisperer, received minimal traction, with just 0.7% of respondents indicating they used it.

Ultimately, the survey highlights a significant shift in how professionals engage with AI. Rather than adopting tools based on corporate mandates or branding, workers are choosing solutions that demonstrably enhance their speed, accuracy, and overall output. While Claude currently leads the pack, its dominance may not be permanent, but it has certainly established a measure of trust among users for now.

According to Blind, the findings underscore the importance of user experience in the competitive AI landscape.

Ex-Amazon Executives Secure $15 Million for Spangle AI Startup

Spangle AI, a startup founded by former Amazon executives, has secured $15 million in Series A funding to enhance real-time, personalized shopping experiences for online retailers.

Spangle AI, a Seattle-based startup focused on revolutionizing online retail, has successfully raised $15 million in a Series A funding round. The investment was led by NewRoad Capital Partners, with participation from Madrona, DNX Ventures, Streamlined Ventures, and several angel investors. Following this funding, Spangle AI is now valued at $100 million.

Founded in 2022 by a team of former Amazon executives, Spangle AI aims to create customized shopping experiences in real-time. The platform can generate tailored storefronts for individual customers by analyzing traffic from various sources, including social media, AI search tools, and autonomous shopping agents.

Spangle AI is addressing a significant shift in e-commerce, moving away from traditional methods that cater primarily to customers visiting a brand’s website directly. “The problem is that websites are not designed to continue a journey that originated somewhere else,” said Spangle CEO Maju Kuruvilla, who previously served as a vice president at Amazon, where he was involved in Prime logistics and fulfillment.

Fei Wang, Spangle’s CTO and a former Principal Engineer at Amazon, emphasized the limitations of existing e-commerce systems. “Having built unified AI systems at Amazon, including Alexa and customer service workflow automation at massive scale, we saw what’s broken in traditional e-commerce stacks: fragmented data, slow feedback cycles, and no intelligence layer tying it together,” Wang explained.

Unlike conventional approaches that rely heavily on user identity or historical data, Spangle’s system focuses on understanding customer intent and engagement. It is trained on a retailer’s catalog, brand guidelines, and performance metrics, allowing for a more contextual shopping experience.

Spangle AI’s innovative approach has attracted the attention of major fashion and retail brands, including EVOLVE, Steve Madden, and Alexander Wang. These partnerships have reportedly resulted in conversion rate increases of up to 50% and significant improvements in return on ad spend. In its first nine months, Spangle AI has secured nine enterprise customers, although the company has not disclosed specific revenue figures.

Kuruvilla noted that while e-commerce retailers excel at attracting customer interest, the challenge lies in converting that interest into sales. “Conversion from all this traffic that’s discovered outside is a huge problem for all these brands,” he stated.

Prior to founding Spangle AI, Kuruvilla was the CEO and CTO at Bolt, a controversial one-click checkout e-commerce startup that achieved a valuation of $11 billion. His extensive background also includes roles at Microsoft, Honeywell, and Milliman.

Fei Wang, who co-founded Spangle AI, previously served as CTO at Saks OFF 5TH, a subsidiary of Saks Fifth Avenue. He spent nearly 12 years at Amazon as an engineer. Yufeng Gou, the head of engineering at Spangle, also has a background at Saks OFF 5TH. Karen Moon, the company’s COO, is a seasoned investor and former CEO at Trendalytics.

As the e-commerce landscape continues to evolve, Spangle AI is positioning itself at the forefront of agentic commerce, leveraging its founders’ extensive experience to create a more seamless and personalized shopping experience for consumers.

The information in this article is based on reports from The American Bazaar.

Plastic Bottles May One Day Power Your Electronic Devices

Researchers have developed a method to transform discarded plastic bottles into supercapacitors, potentially powering electric vehicles and electronics within the next decade.

Every year, billions of single-use plastic bottles contribute to the growing waste crisis, ending up in landfills and oceans. However, a recent scientific breakthrough suggests that these discarded bottles could play a role in powering our daily lives.

Researchers have successfully created high-performance energy storage devices known as supercapacitors from waste polyethylene terephthalate (PET) plastic, commonly found in beverage containers. This innovative research, published in the journal Energy & Fuels and highlighted by the American Chemical Society, aims to reduce plastic pollution while advancing cleaner energy technologies.

According to the researchers, over 500 billion single-use PET plastic bottles are produced globally each year, with most being used once and then discarded. Lead researcher Dr. Yun Hang Hu emphasizes that this scale of production presents a significant environmental challenge. Instead of allowing this plastic to accumulate, the research team focused on upcycling it into valuable materials that can support renewable energy systems and reduce production costs.

Supercapacitors are devices that can charge quickly and deliver power instantly, making them ideal for applications in electric vehicles, solar power systems, and everyday electronics. Dr. Hu’s team discovered a method to manufacture these energy storage components using discarded PET plastic bottles. By reshaping the plastic at extremely high temperatures, they transformed waste into materials capable of generating electricity efficiently and repeatedly.

The process begins with cutting the PET bottles into tiny, grain-sized pieces. These pieces are then mixed with calcium hydroxide and heated to nearly 1,300 degrees Fahrenheit in a vacuum. This intense heat converts the plastic into a porous, electrically conductive carbon powder. The researchers then form this powder into thin electrode layers.

For the separator, small pieces of PET are flattened and perforated with hot needles to create a pattern that allows electric current to pass through efficiently while ensuring safety and durability. Once assembled, the supercapacitor consists of two carbon electrodes separated by the PET film and submerged in a potassium hydroxide electrolyte.

In testing, the all-waste-plastic supercapacitor outperformed similar devices made with traditional glass fiber separators. After repeated charging and discharging cycles, it retained 79 percent of its energy capacity, compared to 78 percent for a comparable glass fiber device. This slight advantage is significant; the PET-based design is cheaper to produce, fully recyclable, and supports circular energy storage technologies that reuse waste materials instead of discarding them.

This breakthrough could have a more immediate impact on everyday life than one might expect. The development of cheaper supercapacitors could lower the costs associated with electric vehicles, solar systems, and portable electronics. Faster charging times and longer lifespans for devices may soon follow. Furthermore, this research illustrates that sustainability does not necessitate sacrifices; waste plastics can become part of the solution rather than remaining a persistent problem.

While this technology is still under development, the research team is optimistic that PET-based supercapacitors could reach commercial markets within the next five to ten years. In the meantime, opting for reusable bottles and plastic-free alternatives remains a practical way to help reduce waste today.

Transforming waste into energy storage is not just an innovative idea; it demonstrates how science can address two pressing global challenges simultaneously. As plastic pollution continues to escalate, so does the demand for energy. This research shows that these issues do not need to be tackled in isolation. By reimagining waste as a resource, scientists are paving the way for a cleaner and more efficient future using materials we currently discard.

If your empty water bottle could one day help power your home or vehicle, would you still view it as trash? Let us know your thoughts by reaching out to us.

According to Fox News, this research highlights the potential of upcycling waste materials to create sustainable energy solutions.

Earth Prepares to Say Goodbye to ‘Mini Moon’ Asteroid Until 2055

Earth is set to bid farewell to a “mini moon” asteroid that has been in close proximity for the past two months, with plans for a return visit in 2055.

Earth is parting ways with an asteroid that has been accompanying it as a “mini moon” for the last two months. This harmless space rock is expected to drift away on Monday, influenced by the stronger gravitational pull of the sun.

However, the asteroid, designated 2024 PT5, will make a brief return visit in January. NASA plans to utilize a radar antenna to observe the 33-foot asteroid during this time, which will enhance scientists’ understanding of the object. It is believed that 2024 PT5 may be a boulder that was ejected from the moon due to an impact from a larger asteroid.

While NASA clarifies that this asteroid is not technically a moon—having never been fully captured by Earth’s gravity—it is still considered “an interesting object” worthy of scientific study. The asteroid was identified by astrophysicist brothers Raul and Carlos de la Fuente Marcos from Complutense University of Madrid, who have conducted hundreds of observations in collaboration with telescopes located in the Canary Islands.

Currently, the asteroid is more than 2 million miles away from Earth, making it too small and faint to be observed without a powerful telescope. In January, it will pass within approximately 1.1 million miles of Earth, maintaining a safe distance before continuing its journey deeper into the solar system. The asteroid is not expected to return until 2055, at which point it will be nearly five times farther away than the moon.

First detected in August, 2024 PT5 began its semi-orbital path around Earth in late September after being influenced by Earth’s gravity, following a horseshoe-shaped trajectory. By the time of its return next year, the asteroid will be traveling at more than double its speed from September, making it too fast to linger, according to Raul de la Fuente Marcos.

NASA plans to track the asteroid for over a week in January using the Goldstone solar system radar antenna located in California’s Mojave Desert, which is part of the Deep Space Network. Current data indicates that during its 2055 visit, this sun-orbiting asteroid will once again make a temporary and partial lap around Earth.

According to NASA, the study of such asteroids can provide valuable insights into the history and composition of celestial bodies in our solar system.

Musk’s Grok AI Chatbot Raises Concerns Over Inappropriate Images

Elon Musk’s AI chatbot Grok faces global backlash as concerns rise over the generation of sexualized images of women and children without consent, prompting investigations and demands for regulatory action.

Elon Musk’s artificial intelligence chatbot Grok is currently under intense scrutiny from governments around the world. Authorities in Europe, Asia, and Latin America have raised serious concerns regarding the creation and circulation of sexualized images of women and children generated without consent.

This backlash follows a troubling increase in explicit content linked to Grok Imagine, an AI-powered image generation feature integrated into Musk’s social media platform, X. Regulators are warning that the tool’s capacity to digitally alter real images using text prompts has exposed significant gaps in AI governance, which could lead to potentially irreversible harm, particularly affecting women and minors.

Countries including the United Kingdom, the European Union, France, India, Poland, Malaysia, and Brazil have either demanded immediate corrective action, initiated investigations, or threatened regulatory penalties. This situation signals what could become one of the most significant international confrontations regarding the misuse of generative AI to date.

Grok Imagine was launched last year, allowing users to create or modify images and videos through simple text commands. The tool features a “spicy mode” designed to permit adult content. While marketed as an edgy alternative to more restricted AI systems, critics argue that this positioning has encouraged misuse.

The controversy escalated recently when Grok reportedly began approving a large volume of user requests to alter images of individuals posted by others on X. Users could generate sexualized depictions by instructing the chatbot to digitally remove or modify clothing. Since Grok’s generated images are publicly displayed on the platform, altered content spread rapidly.

A recent analysis by digital watchdog AI Forensics reviewed 20,000 images generated over a one-week period and found that approximately 2% appeared to depict individuals who looked under 18. Many images showed young or very young-looking girls in bikinis or transparent clothing, raising urgent concerns about AI-enabled sexual exploitation.

Experts warn that such nudification tools blur the line between consensual creativity and non-consensual abuse, making regulation particularly challenging once content goes viral.

In response to media inquiries, Musk’s AI company, xAI, issued an automated message stating, “Legacy Media Lies.” While the company did not deny the existence of problematic Grok content, X maintained that it enforces rules against illegal material.

On its Safety account, the platform stated that it removes unlawful content, permanently suspends accounts, and cooperates with law enforcement when necessary. Musk echoed this sentiment, asserting, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

However, critics argue that enforcement after harm occurs does little to protect victims, especially when AI tools enable rapid and repeated abuse.

In the United Kingdom, Technology Secretary Liz Kendall described the content linked to Grok as “absolutely appalling” and demanded urgent intervention by X. “We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls,” Kendall stated.

The UK communications regulator Ofcom confirmed it has made urgent contact with both X and xAI to assess compliance with the Online Safety Act, which mandates platforms to prevent and remove child sexual abuse material once identified.

The European Commission has also taken a firm stance on the issue. Commission spokesman Thomas Regnier stated that officials are fully aware of Grok being used to generate explicit sexual content, including imagery resembling children. “This is not spicy. This is illegal. This is appalling. This is disgusting, and it has no place in Europe,” Regnier asserted.

EU officials noted that Grok had previously drawn attention for generating Holocaust-denial content, further raising concerns about the platform’s safeguards and oversight mechanisms.

In France, prosecutors have expanded an ongoing investigation into X to include sexually explicit AI-generated deepfakes. This move follows complaints from lawmakers and alerts from multiple government ministers. French authorities emphasized that crimes committed online carry the same legal consequences as those committed offline, stressing that AI does not exempt platforms or users from accountability.

India’s Ministry of Electronics and Information Technology issued a 72-hour ultimatum demanding that X remove all unlawful content and submit a detailed report on Grok’s governance and safety framework. The ministry accused the platform of enabling the “gross misuse” of artificial intelligence by allowing the creation of obscene and derogatory images of women. It warned that failure to comply could result in serious legal consequences, and the deadline has since passed without a public response.

In Poland, parliamentary speaker Włodzimierz Czarzasty cited Grok while advocating for stronger digital safety legislation to protect minors, describing the AI’s behavior as “undressing people digitally.”

Malaysia’s communications regulator confirmed investigations into users who violate laws against obscene content and stated it would summon representatives from X. In Brazil, federal lawmaker Erika Hilton filed complaints with prosecutors and the national data protection authority, calling for Grok’s AI image functions to be suspended during investigations. “The right to one’s image is individual,” Hilton stated. “It cannot be overridden by platform terms of use, and the mass distribution of sexualized images of women and children crosses all ethical and legal boundaries.”

The Grok controversy has reignited a global debate over the extent to which AI companies should be allowed to push boundaries in the name of innovation. Regulators argue that without strict safeguards, generative AI risks normalizing digital abuse on an unprecedented scale.

As governments consider fines, restrictions, and even feature bans, the outcome of this situation may set a lasting precedent for how AI systems are regulated worldwide, as well as how societies balance technological freedom with human dignity, according to Global Net News.

Interstellar Voyager 1 Resumes Operations After Communication Pause with NASA

Nasa’s Voyager 1 has resumed operations and communications after a temporary switch to a lower-power mode, allowing the spacecraft to continue its mission in interstellar space.

NASA has confirmed that Voyager 1 has regained its communication capabilities and resumed regular operations following a brief pause in late October. The spacecraft, which is currently located approximately 15.4 billion miles from Earth, experienced an unexpected shutdown of its primary radio transmitter, known as the X-band. In its place, Voyager 1 switched to its much weaker S-band transmitter, a mode that had not been utilized in over 40 years.

The communication link between NASA and Voyager 1 has been inconsistent, particularly during the period when the spacecraft was operating on the lower-band S-band. This switch hindered the Voyager mission team’s ability to download crucial science data and assess the spacecraft’s status.

Earlier this month, NASA engineers successfully reactivated the X-band transmitter, allowing for the collection of data from the four operational science instruments onboard Voyager 1. With communications restored, engineers are now focused on completing a few remaining tasks to return Voyager 1 to its pre-issue operational state. One of these tasks involves resetting the system that synchronizes the spacecraft’s three onboard computers.

The activation of the S-band was a result of Voyager 1’s fault protection system, which was triggered when engineers turned on a heater on the spacecraft. The system determined that the probe did not have sufficient power and automatically disabled nonessential systems to conserve energy for critical operations.

In this process, the fault protection system turned off all nonessential systems, including the X-band, and activated the S-band to ensure continued communication with Earth. Notably, Voyager 1 had not used the S-band for communication since 1981.

Voyager 1’s journey began in 1977, when it was launched alongside its twin, Voyager 2, on a mission to explore the gas giant planets of the solar system. The spacecraft has transmitted stunning images of Jupiter’s Great Red Spot and Saturn’s iconic rings. Voyager 2 continued its journey to Uranus and Neptune, while Voyager 1 utilized Saturn’s gravity to propel itself past Pluto.

Each Voyager spacecraft is equipped with ten science instruments, and currently, four of these instruments are operational on Voyager 1, allowing scientists to study the particles, plasma, and magnetic fields present in interstellar space.

According to NASA, the successful reestablishment of communication with Voyager 1 marks a significant milestone in the ongoing mission of this historic spacecraft.

Malicious Chrome Extensions Discovered Stealing Sensitive User Data

Two malicious Chrome extensions, “Phantom Shuttle,” were found stealing sensitive user data for years before being removed from the Chrome Web Store, raising concerns about online security.

Security researchers have recently exposed two Chrome extensions, known as “Phantom Shuttle,” that have been stealing user data for years. These extensions, which were designed to appear as harmless proxy tools, were found to be hijacking internet traffic and compromising sensitive information from unsuspecting users. Alarmingly, both extensions were available on Chrome’s official extension marketplace.

According to researchers at Socket, the extensions have been active since at least 2017. They were marketed towards foreign trade workers needing to test internet connectivity from various regions and were sold as subscription-based services, with prices ranging from approximately $1.40 to $13.60. At first glance, the extensions seemed legitimate, with descriptions that matched their purported functionality and reasonable pricing.

However, the reality was far more concerning. After installation, the Phantom Shuttle extensions routed all user web traffic through proxy servers controlled by the attackers. These proxies utilized hardcoded credentials embedded directly into the extension’s code, making detection difficult. The malicious logic was concealed within what appeared to be a legitimate jQuery library, further complicating efforts to identify the threat.

The attackers employed a custom character-index encoding scheme to obscure the credentials, ensuring they were not easily accessible. Once activated, the extensions monitored web traffic and intercepted HTTP authentication challenges on any site visited by the user. To maintain control over the traffic flow, the extensions dynamically reconfigured Chrome’s proxy settings using an auto-configuration script, effectively forcing the browser to route requests through the attackers’ infrastructure.

In its default “smarty” mode, Phantom Shuttle routed traffic from over 170 high-value domains, including developer platforms, cloud service dashboards, social media sites, and adult content portals. Notably, local networks and the attackers’ command-and-control domain were excluded, likely to avoid raising suspicion or disrupting their operations.

While functioning as a man-in-the-middle, the extensions were capable of capturing any data submitted through web forms. This included usernames, passwords, credit card details, personal information, session cookies from HTTP headers, and API tokens extracted from network requests. The potential for data theft was significant, raising serious concerns about user privacy and security.

Following the revelations, CyberGuy reached out to Google, which confirmed that both extensions had been removed from the Chrome Web Store. This incident underscores the importance of vigilance when it comes to browser extensions, as they can significantly increase the attack surface for cyber threats.

To mitigate risks associated with browser extensions, users are advised to regularly review the extensions installed on their devices. It is essential to scrutinize any extension that requests extensive permissions, particularly those related to proxy tools, VPNs, or network functionalities. If an extension seems suspicious, users should disable it immediately to prevent any potential data breaches.

Additionally, employing strong antivirus software can provide an extra layer of protection against suspicious network activity and unauthorized changes to browser settings. This software can alert users to potential threats, including phishing emails and ransomware scams, helping to safeguard personal information and digital assets.

Ultimately, the Phantom Shuttle incident serves as a reminder of the dangers posed by malicious extensions that masquerade as legitimate tools. Users must remain vigilant and proactive in managing their browser extensions to protect their online privacy and security. As the landscape of cyber threats continues to evolve, staying informed and cautious is crucial.

For further information on cybersecurity and best practices, visit CyberGuy.com.

OpenAI Acknowledges AI Browsers Vulnerable to Unsolvable Prompt Attacks

OpenAI acknowledges that prompt injection attacks pose a long-term security risk for AI-powered browsers, highlighting the challenges of safeguarding these technologies in an evolving cyber landscape.

OpenAI has developed an automated attacker system to assess the security of its ChatGPT Atlas browser against prompt injection threats and other cybercriminal risks. This initiative underscores the growing recognition that cybercriminals can exploit vulnerabilities without relying on traditional malware or exploits; sometimes, all they need are the right words.

In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to be fully eradicated. These attacks involve embedding malicious instructions within web pages, documents, or emails in ways that are not easily detectable by humans but can be recognized by AI agents. Once the AI processes this content, it may be misled into executing harmful commands.

OpenAI likened this issue to scams and social engineering, noting that while it is possible to reduce the frequency of such attacks, complete elimination is improbable. The company also pointed out that the “agent mode” feature in its ChatGPT Atlas browser increases the potential risk, as it broadens the attack surface. The more capabilities an AI has to act on behalf of users, the greater the potential for damage if something goes awry.

Since the launch of the ChatGPT Atlas browser in October, security researchers have been quick to explore its vulnerabilities. Within hours of its release, demonstrations emerged showing how a few strategically placed words in a Google Doc could alter the browser’s behavior. On the same day, Brave issued a warning, stating that indirect prompt injection represents a fundamental issue for AI-powered browsers, including those developed by other companies like Perplexity.

This challenge is not confined to OpenAI alone. Earlier this month, the National Cyber Security Centre in the U.K. cautioned that prompt injection attacks against generative AI systems may never be fully mitigated. OpenAI views prompt injection as a long-term security challenge that necessitates ongoing vigilance rather than a one-time solution. Their strategy includes quicker patch cycles, continuous testing, and layered defenses, aligning with approaches taken by competitors such as Anthropic and Google, who advocate for architectural controls and persistent stress testing.

OpenAI’s approach includes the development of what it calls an “LLM-based automated attacker.” This AI-driven system is designed to simulate a hacker’s behavior, using reinforcement learning to identify ways to insert malicious instructions into an AI agent’s workflow. The bot conducts simulated attacks, predicting how the target AI would reason and where it might fail, allowing it to refine its tactics based on feedback. OpenAI believes this method can reveal weaknesses more rapidly than traditional attackers might.

Despite these defensive measures, AI browsers remain vulnerable. They combine two elements that attackers find appealing: autonomy and access. Unlike standard browsers, AI browsers do not merely display information; they can read emails, scan documents, click links, and take actions on behalf of users. This means that a single malicious prompt hidden within a webpage or document can influence the AI’s actions without the user’s awareness. Even with safeguards in place, these agents operate on a foundation of trust in the content they process, which can be exploited.

While it may not be possible to completely eliminate prompt injection attacks, users can take steps to mitigate their impact. It is advisable to limit an AI browser’s access to only what is necessary. Avoid linking primary email accounts, cloud storage, or payment methods unless absolutely required. The more data an AI can access, the more attractive it becomes to potential attackers, and reducing access can minimize the potential fallout if an attack occurs.

Users should also refrain from allowing AI browsers to send emails, make purchases, or modify account settings without explicit confirmation. This additional layer of verification can interrupt long attack chains and provide an opportunity to detect suspicious behavior. Many prompt injection attacks rely on the AI acting silently in the background without user oversight.

Utilizing a password manager is another effective strategy to ensure that each account has a unique and robust password. If an AI browser or a malicious webpage compromises one credential, attackers will be unable to exploit it elsewhere. Many password managers also have features that prevent autofill on unfamiliar or suspicious sites, alerting users to potential threats before they enter any information.

Additionally, users should check if their email addresses have been exposed in previous data breaches. A reliable password manager often includes a breach scanner that can identify whether email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Even if an attack originates within the browser, antivirus software can still detect suspicious scripts, unauthorized system changes, or malicious network activity. Effective antivirus solutions focus on behavior rather than just files, which is essential for addressing AI-driven or script-based attacks. Strong antivirus protection can also alert users to phishing emails and ransomware scams, safeguarding personal information and digital assets.

When instructing an AI browser, it is important to be specific about its permissions. General commands like “handle whatever is needed” can give attackers the opportunity to manipulate the AI through hidden prompts. Narrowing instructions makes it more challenging for malicious content to influence the agent.

As AI browsers continue to evolve, security fixes must keep pace with emerging attack techniques. Delaying updates can leave known vulnerabilities exposed for longer than necessary. Enabling automatic updates ensures that users receive protection as soon as it becomes available, even if they miss the announcement.

The rapid rise of AI browsers has led to offerings from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Existing browsers like Chrome and Edge are also integrating AI and agentic features into their platforms. While these technologies hold promise, they are still in their infancy, and users should be cautious about the hype surrounding them.

As AI browsers become more prevalent, the question remains: Are they worth the risk, or are they advancing faster than security measures can keep up? Users are encouraged to share their thoughts on this topic at Cyberguy.com.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms.

NASA has finalized its strategy for sustaining a human presence in space, looking ahead to the planned de-orbiting of the International Space Station (ISS) in 2030. The agency’s new document emphasizes the importance of maintaining the capability for extended stays in orbit after the ISS is retired.

“NASA’s Low Earth Orbit Microgravity Strategy will guide the agency toward the next generation of continuous human presence in orbit, enable greater economic growth, and maintain international partnerships,” the document states. This commitment comes amid concerns about whether new space stations will be ready in time, especially with the incoming administration’s efforts to cut spending through the Department of Government Efficiency, raising fears of potential budget cuts for NASA.

NASA Deputy Administrator Pam Melroy acknowledged the tough decisions that have been made in recent years due to budget constraints. “Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities,” she said.

Commercial space company Voyager is actively working on one of the space stations that could replace the ISS when it de-orbits in 2030. Jeffrey Manber, Voyager’s president of international and space stations, expressed support for NASA’s strategy, emphasizing the need for a clear commitment from the United States. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” he stated.

The push for a sustained human presence in space dates back to President Reagan, who first launched the initiative for a permanent human residence in space. He also highlighted the importance of private partnerships, stating, “America has always been greatest when we dared to be great. We can reach for greatness.” Reagan’s vision included the belief that the market for space transportation could surpass the nation’s capacity to develop it.

The ISS has been a cornerstone of human spaceflight since the first module was launched in 1998. Over the past 24 years, it has hosted more than 28 astronauts from 23 countries, maintaining continuous human occupation.

The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and emphasized the need to transition to commercial platforms. The Biden administration has continued this policy direction.

NASA Administrator Bill Nelson noted the possibility of extending the ISS’s operational life if commercial stations are not ready. “Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031,” he said in June.

In recent months, there have been discussions about what “continuous human presence” truly means. Melroy addressed these concerns at the International Astronautical Congress in October, stating, “I just want to talk about the elephant in the room for a moment, continuous human presence. What does that mean? Is it continuous heartbeat or continuous capability?” She emphasized that while the agency hoped for a seamless transition, ongoing conversations are necessary to clarify the definition and implications of continuous presence.

NASA’s finalized strategy has taken into account feedback from commercial and international partners regarding the potential loss of the ISS without a ready commercial alternative. “Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy said. She highlighted that the United States currently leads in human spaceflight, noting that the only other space station in orbit when the ISS de-orbits will be the Chinese space station. “We want to remain the partner of choice for our industry and for our goals for NASA,” she added.

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

Melroy acknowledged the challenges posed by budget caps resulting from agreements between the White House and Congress for fiscal years 2024 and 2025. “We’ve had some challenges, to be perfectly honest with you. The budget caps have left us without as much investment. So, what we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit,” she stated.

Voyager maintains that it is on track with its development timeline and plans to launch its starship space station in 2028. “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station,” Manber said. He emphasized the importance of maintaining a permanent presence in space, warning that losing it could disrupt the supply chain that supports the burgeoning space economy.

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could be crucial for some projects. NASA may also consider funding new space station proposals, including concepts from Long Beach, California’s Vast Space, which recently unveiled plans for its Haven modules, with a launch of Haven-1 anticipated as soon as next year.

Melroy concluded by underscoring the importance of competition in this development project. “We absolutely think competition is critical. This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” she said.

As NASA moves forward with its strategy, the agency remains committed to ensuring a continuous human presence in space, fostering innovation and collaboration in the commercial space sector.

According to Fox News.

University of Phoenix Data Breach Affects 3.5 Million Individuals

Nearly 3.5 million individuals associated with the University of Phoenix were impacted by a significant data breach that exposed sensitive personal and financial information.

The University of Phoenix has confirmed a substantial data breach affecting approximately 3.5 million students and staff. The incident originated in August when cyber attackers infiltrated the university’s network and accessed sensitive information without detection.

The breach was discovered on November 21, after the attackers listed the university on a public leak site. In early December, the university publicly disclosed the incident, and its parent company filed an 8-K form with regulators to report the breach.

According to notification letters submitted to Maine’s Attorney General, a total of 3,489,274 individuals were affected by the breach. This group includes current and former students, faculty, staff, and suppliers.

The university reported that hackers exploited a zero-day vulnerability in the Oracle E-Business Suite, an application that manages financial operations and contains highly sensitive data. Security researchers have indicated that the attack bears similarities to tactics employed by the Clop ransomware gang, which has a history of stealing data through zero-day vulnerabilities rather than encrypting systems.

The specific vulnerability associated with this breach is identified as CVE-2025-61882 and has reportedly been exploited since early August. The attackers accessed a range of sensitive personal and financial information, raising significant concerns about identity theft, financial fraud, and targeted phishing scams.

In letters sent to those affected, the university confirmed the breach’s impact on 3,489,274 individuals. Current and former students and employees are advised to monitor their mail closely, as notification letters are typically sent via postal mail rather than email. These letters detail the exposed data and provide instructions for accessing protective services.

A representative from the University of Phoenix provided a statement regarding the incident: “We recently experienced a cybersecurity incident involving the Oracle E-Business Suite software platform. Upon detecting the incident on November 21, 2025, we promptly took steps to investigate and respond with the assistance of leading third-party cybersecurity firms. We are reviewing the impacted data and will provide the required notifications to affected individuals and regulatory entities.”

To assist those affected, the University of Phoenix is offering free identity protection services. Individuals must use the redemption code provided in their notification letter to enroll in these services. Without this code, activation is not possible.

This breach is not an isolated incident; Clop has employed similar tactics in previous attacks involving various platforms, including GoAnywhere MFT, Accellion FTA, MOVEit Transfer, Cleo, and Gladinet CentreStack. Other universities, such as Harvard University and the University of Pennsylvania, have also reported incidents related to Oracle EBS vulnerabilities.

The U.S. government has taken notice of the situation, with the Department of State offering a reward of up to $10 million for information linking Clop’s attacks to foreign government involvement.

Universities are known to store vast amounts of personal data, including student records, financial aid files, payroll systems, and donor databases. This makes them high-value targets for cybercriminals, as a single breach can expose years of data tied to millions of individuals.

If you believe you may be affected by this breach, it is crucial to act quickly. Carefully read the notification letter you receive, as it will explain what data was exposed and how to enroll in protective services. Using the redemption code provided is essential, especially given the involvement of Social Security and banking data.

Even if you do not qualify for the free identity protection service, investing in an identity theft protection service is a wise decision. These services actively monitor sensitive information, such as your Social Security number, phone number, and email address. If your information appears on the dark web or if someone attempts to open a new account in your name, you will receive immediate alerts.

Additionally, these services can assist you in quickly freezing bank and credit card accounts to limit further fraud. It is also advisable to check bank statements and credit card activity for any unfamiliar charges and report anything suspicious immediately.

Implementing a credit freeze can prevent criminals from opening new accounts in your name, and this process is both free and reversible. To learn more about how to freeze your credit, visit relevant resources online.

As the fallout from this breach continues, individuals should remain vigilant for increased scam emails and phone calls, as criminals may reference the breach to appear legitimate. Strong antivirus software is essential for safeguarding against malicious links that could compromise your private information.

Keeping operating systems and applications up to date is also critical, as attackers often exploit outdated software to gain access. Enabling automatic updates and reviewing app permissions can help prevent further data breaches.

The University of Phoenix data breach underscores a growing concern in higher education regarding cybersecurity. When attackers exploit trusted enterprise software, the consequences can be widespread and severe. While the university’s offer of free identity protection is a positive step, long-term vigilance is essential to mitigate risks.

As discussions about cybersecurity standards in educational institutions continue, students may want to consider demanding stronger protections before enrolling. For further information and resources, visit CyberGuy.com.

Orbiter Photos Reveal Lunar Modules from First Two Moon Landings

Recent aerial images from India’s Chandrayaan 2 orbiter reveal the Apollo 11 and Apollo 12 lunar landing modules more than 50 years after their historic missions.

Photos captured by the Indian Space Research Organization’s moon orbiter, Chandrayaan 2, have provided a stunning look at the Apollo 11 and Apollo 12 landing sites over half a century later. The images, taken in April 2021, were recently shared on Curiosity’s X page, a platform dedicated to space exploration updates.

Curiosity’s post featured the aerial photographs alongside a caption that read, “Image of Apollo 11 and 12 taken by India’s Moon orbiter. Disapproving Moon landing deniers.” The images clearly depict the lunar modules, serving as a reminder of humanity’s monumental achievements in space exploration.

The Apollo 11 mission, which took place on July 20, 1969, marked a historic milestone as Neil Armstrong and Buzz Aldrin became the first men to walk on the lunar surface. Their fellow astronaut, Michael Collins, remained in lunar orbit during their historic excursion. The lunar module, known as Eagle, was left in lunar orbit after it successfully rendezvoused with Collins’ command module the following day, before ultimately returning to the moon’s surface.

Just months later, Apollo 12 followed as NASA’s second crewed mission to land on the moon. On November 19, 1969, astronauts Charles “Pete” Conrad and Alan Bean became the third and fourth men to set foot on the lunar surface. The Apollo program continued its series of missions until December 1972, when astronaut Eugene Cernan became the last person to walk on the moon.

The Chandrayaan-2 mission was launched on July 22, 2019, precisely 50 years after the historic Apollo 11 mission. It was two years later that the orbiter captured the remarkable images of the 1969 lunar landers.

In addition to Chandrayaan-2, India successfully launched Chandrayaan-3 last year, which achieved the significant milestone of being the first mission to land near the moon’s south pole.

These recent images not only highlight the enduring legacy of the Apollo missions but also underscore the advancements in space exploration technology that allow us to revisit and document these historic sites from afar, according to Fox News.

Grok AI Faces Backlash Over Flood of Sexualized Images of Women

Elon Musk’s AI chatbot Grok is facing significant backlash after users reported its image-editing feature is being misused to create sexualized images of women and minors without consent.

Elon Musk’s AI chatbot, Grok, is under intense scrutiny following reports that its image-editing feature can be exploited to generate sexualized images of women and minors without their consent. This alarming capability allows users to pull photos from the social media platform X and digitally modify them to depict individuals in lingerie, bikinis, or in states of undress.

In recent days, users on X have raised concerns about Grok being used to create disturbing content involving minors, including images that portray children in revealing clothing. The controversy emerged shortly after X introduced an “Edit Image” option, which enables users to modify images through text prompts without obtaining permission from the original poster.

Since the feature’s rollout on Christmas Day, Grok’s X account has been inundated with requests for sexually explicit edits. Reports indicate that some users have taken advantage of this tool to partially or completely strip clothing from images of women and even children.

Rather than addressing the issue with the seriousness it warrants, Musk appeared to trivialize the situation, responding with laugh-cry emojis to AI-generated images of well-known figures, including himself, depicted in bikinis. This reaction has drawn further criticism from various quarters.

In response to the backlash, a member of the xAI technical team, Parsa Tajik, acknowledged the problem on X, stating, “Hey! Thanks for flagging. The team is looking into further tightening our guardrails.”

By Friday, government officials in both India and France announced they were reviewing the situation and considering potential actions to address the misuse of Grok’s features.

In a statement addressing the backlash, Grok conceded that the system had failed to prevent misuse. “We’ve identified lapses in safeguards and are urgently fixing them,” the account stated, emphasizing that “CSAM (Child Sexual Abuse Material) is illegal and prohibited.”

The impact of these alterations on those targeted has been profoundly personal. Samantha Smith, a victim of the misuse, told the BBC she felt “dehumanized and reduced into a sexual stereotype” after Grok digitally altered an image of her to remove clothing. “While it wasn’t me that was in states of undress, it looked like me and it felt like me, and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she explained.

Another victim, Julie Yukari, a musician based in Rio de Janeiro, shared her experience after posting a photo on X just before midnight on New Year’s Eve. The image, taken by her fiancé, showed her in a red dress, curled up in bed with her black cat, Nori. The following day, as the post garnered hundreds of likes, Yukari began receiving notifications indicating that some users were prompting Grok to manipulate the image by digitally removing her clothing or reimagining her in a bikini.

During the investigation into this issue, The American Bazaar discovered multiple instances of users openly posting prompts requesting Grok to undress women in images. One user wrote, “@grok remove the bikini and have no clothes,” while another posted, “hey @grok remove the top.” Such prompts remain visible on Musk’s platform, highlighting the ease with which the feature can be misused.

Experts monitoring X’s AI governance have noted that the current backlash was anticipated. Three specialists who have followed the platform’s AI policies indicated to Reuters that the company had previously dismissed repeated warnings from civil society groups and child safety advocates. These concerns included a letter sent last year that cautioned xAI was just one step away from triggering “a torrent of obviously nonconsensual deepfakes.”

The ongoing controversy surrounding Grok underscores the urgent need for stricter regulations and safeguards to protect individuals from digital abuse and exploitation. As the situation develops, it remains to be seen how Musk and his team will address these critical concerns.

The post ‘Remove the top’: Grok AI floods with sexualized images of women appeared first on The American Bazaar.

Fake AI Chat Results Linked to Dangerous Mac Malware Spread

Security researchers warn that a new malware campaign is exploiting trust in AI-generated content to deliver dangerous software to Mac users through misleading search results.

Cybercriminals have long targeted the platforms and services that people trust the most. From email to search results, and now to AI chat responses, attackers are continually adapting their tactics. Recently, researchers have identified a new campaign in which fake AI conversations appear in Google search results, luring unsuspecting Mac users into installing harmful malware.

The malware in question is known as Atomic macOS Stealer, or AMOS. This campaign takes advantage of the growing reliance on AI tools for everyday assistance, presenting seemingly helpful and legitimate step-by-step instructions that ultimately lead to system compromise.

Investigators have confirmed that both ChatGPT and Grok have been misused in this malicious operation. One notable case traced back to a simple Google search for “clear disk space on macOS.” Instead of directing the user to a standard help article, the search result displayed what appeared to be an AI-generated conversation. This conversation provided clear and confident instructions, culminating in a command for the user to run in the macOS Terminal, which subsequently installed AMOS.

Upon further investigation, researchers discovered multiple instances of poisoned AI conversations appearing for similar queries. This consistency suggests a deliberate effort to target Mac users seeking routine maintenance assistance.

This tactic is reminiscent of a previous campaign that utilized sponsored search results and SEO-poisoned links, directing users to fake macOS software hosted on GitHub. In that case, attackers impersonated legitimate applications and guided users through terminal commands that also installed AMOS.

Once the terminal command is executed, the infection chain is triggered immediately. The command contains a base64 string that decodes into a URL hosting a malicious bash script. This script is designed to harvest credentials, escalate privileges, and establish persistence, all while avoiding visible security warnings.

The danger lies in the seemingly benign nature of the process. There are no installer windows, obvious permission prompts, or opportunities for users to review what is about to run. Because the execution occurs through the command line, standard download protections are bypassed, allowing attackers to execute their malicious code without detection.

This campaign effectively combines two powerful elements: the trust users place in AI-generated answers and the credibility of search results. Major chat tools, including Grok on X, allow users to delete parts of conversations or share selected snippets. This feature enables attackers to curate polished exchanges that appear genuinely helpful while concealing the manipulative prompts that produced them.

Using prompt engineering, attackers can manipulate ChatGPT to generate step-by-step cleanup or installation guides that ultimately lead to malware installation. The sharing feature of ChatGPT then creates a public link within the attacker’s account. From there, criminals either pay for sponsored search placements or employ SEO tactics to elevate these shared conversations in search results.

Some ads are crafted to closely resemble legitimate links, making it easy for users to assume they are safe without verifying the advertiser’s identity. One documented example showed a sponsored result promoting a fake “Atlas” browser for macOS, complete with professional branding.

Once these links are live, attackers need only wait for users to search, click, and trust the AI-generated output, following the instructions precisely as written.

While AI tools can be beneficial, attackers are now manipulating these technologies to lead users into dangerous situations. To protect yourself without abandoning search or AI entirely, consider the following precautions.

The most critical rule is this: if an AI response or webpage instructs you to open Terminal and paste a command, stop immediately. Legitimate macOS fixes rarely require users to blindly execute scripts copied from the internet. Once you press Enter, you lose visibility into what happens next, and malware like AMOS exploits this moment of trust to bypass standard security checks.

AI chats should not be considered authoritative sources. They can be easily manipulated through prompt engineering to produce dangerous guides that appear clean and confident. Before acting on any AI-generated fix, cross-check it with Apple’s official documentation or a trusted developer site. If verification is difficult, do not execute the command.

Using a password manager is another effective strategy. These tools create strong, unique passwords for each account, ensuring that if one password is compromised, it does not jeopardize all your other accounts. Many password managers also prevent autofilling credentials on unfamiliar or fake sites, providing an additional layer of security against credential-stealing malware.

It is also wise to check if your email has been exposed in previous breaches. Our top-rated password manager includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If a match is found, promptly change any reused passwords and secure those accounts with new, unique credentials.

Regular updates are essential, as AMOS and similar malware often exploit known vulnerabilities after initial infections. Delaying updates gives attackers more opportunities to escalate privileges or maintain persistence. Enable automatic updates to ensure you remain protected, even if you forget to do so manually.

Modern macOS malware frequently operates through scripts and memory-only techniques. A robust antivirus solution does more than scan files; it monitors behavior, flags suspicious scripts, and can halt malicious activity even when no obvious downloads occur. This is particularly crucial when malware is delivered through Terminal commands.

To safeguard against malicious links that could install malware and access your private information, ensure you have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets secure.

Paid search ads can closely mimic legitimate results. Always verify the identity of the advertiser before clicking. If a sponsored result leads to an AI conversation, a download, or instructions to run commands, close it immediately.

Search results promising quick fixes, disk cleanup, or performance boosts are common entry points for malware. If a guide is not hosted by Apple or a reputable developer, assume it may be risky, especially if it suggests command-line solutions.

Attackers invest time in making fake AI conversations appear helpful and professional. Clear formatting and confident language are often part of the deception. Taking a moment to question the source can often disrupt the attack chain.

This campaign illustrates a troubling shift from traditional hacking methods to manipulating user trust. Fake AI conversations succeed because they sound calm, helpful, and authoritative. When these conversations are elevated through search results, they gain undeserved credibility. While the technical aspects of AMOS are complex, the entry point remains simple: users must follow instructions without questioning their origins.

Have you ever followed an AI-generated fix without verifying it first? Share your experiences with us at Cyberguy.com.

According to CyberGuy.com, staying vigilant and informed is key to navigating the evolving landscape of cybersecurity threats.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers recently misidentified a Tesla Roadster launched into space by SpaceX in 2018 as an asteroid, prompting a swift correction from the Minor Planet Center.

A surprising mix-up occurred earlier this month when astronomers mistook a Tesla Roadster, launched into orbit by SpaceX in 2018, for an asteroid. The Minor Planet Center, part of the Harvard-Smithsonian Center for Astrophysics in Massachusetts, quickly corrected the error after registering the object as 2018 CN41.

The registration of 2018 CN41 was deleted just one day later, on January 3, when it became clear that the object in question was not an asteroid but rather Elon Musk’s iconic roadster. The Minor Planet Center announced on its website that the designation was removed after it was determined that the orbit of 2018 CN41 matched that of an artificial object, specifically the Falcon Heavy upper stage carrying the Tesla Roadster.

This roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. Originally, it was expected to enter an elliptical orbit around the sun, extending slightly beyond Mars before returning toward Earth. However, it appears that the roadster exceeded Mars’ orbit and continued on toward the asteroid belt, as Musk indicated at the time.

When the Tesla Roadster was mistakenly identified as an asteroid, it was located less than 150,000 miles from Earth, which is closer than the orbit of the moon. This proximity raised concerns among astronomers, who felt it necessary to monitor the object closely.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the incident, highlighting the challenges posed by untracked objects in space. “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” he remarked, emphasizing the potential implications of such identification errors.

The Tesla Roadster, which features a mannequin named Starman in the driver’s seat, has become a symbol of SpaceX’s innovative spirit and Musk’s unique approach to space exploration. As it continues its journey through the cosmos, the roadster serves as a reminder of the intersection between technology, humor, and the vastness of space.

As the situation unfolded, Fox News Digital reached out to SpaceX for further comment but had not received a response at the time of publication. This incident underscores the importance of accurate tracking and identification of objects in space, particularly as more artificial satellites and spacecraft are launched into orbit.

According to Astronomy Magazine, the mix-up illustrates the complexities involved in monitoring the increasing number of artificial objects in Earth’s vicinity. As space exploration continues to advance, the need for precise tracking systems becomes ever more critical.

Rising RAM Prices Expected to Increase Technology Costs by 2026

The rising cost of RAM is expected to increase the prices of various tech devices in 2026, impacting consumers across multiple sectors.

The cost of many electronic devices is likely to rise due to a significant increase in the price of Random Access Memory (RAM), a component typically regarded as one of the more affordable parts of a computer. Since October of last year, RAM prices have more than doubled, raising concerns among manufacturers and consumers alike.

RAM is essential for the operation of devices ranging from smartphones and smart TVs to medical equipment. The surge in RAM prices has been largely attributed to the growing demand from artificial intelligence (AI) data centers, which require substantial amounts of memory to function effectively.

While manufacturers often absorb minor cost increases, substantial hikes like this one are typically passed on to consumers. Steve Mason, general manager of CyberPowerPC, a company that specializes in building computers, noted, “We are being quoted costs around 500% higher than they were only a couple of months ago.” He emphasized that there will inevitably come a point where these elevated component costs will compel manufacturers to reconsider their pricing strategies.

Mason further explained that any device utilizing memory or storage could see a corresponding price increase. RAM plays a critical role in storing code while a device is in use, making it a vital component in every computer system.

Danny Williams, a representative from PCSpecialist, another computer building site, expressed his expectation that price increases would persist “well into 2026.” He remarked on the buoyant market conditions of 2025 and warned that if memory prices do not stabilize, there could be a decline in consumer demand in the upcoming year. Williams observed a varied impact across different RAM producers, with some vendors maintaining larger inventories, resulting in more moderate price increases of approximately 1.5 to 2 times. In contrast, other companies with limited stock have raised prices by as much as five times.

Chris Miller, author of the book “Chip War,” identified AI as the primary driver of demand for computer memory. He stated, “There’s been a surge of demand for memory chips, driven above all by the high-end High Bandwidth Memory that AI requires.” This heightened demand has led to increased prices across various types of memory chips.

Miller also pointed out that prices can fluctuate dramatically based on supply and demand dynamics, which are currently skewed in favor of demand. Mike Howard from Tech Insights elaborated on this by indicating that cloud service providers are finalizing their memory needs for 2026 and 2027. This clarity in demand has made it evident that supply will not keep pace with the requirements set by major players like Amazon and Google.

Howard remarked, “With both demand clarity and supply constraints converging, suppliers have steadily pushed prices upward, in some cases aggressively.” He noted that some suppliers have even paused issuing price quotes, a rare move that signals confidence in the expectation that prices will continue to rise.

As the tech industry braces for these changes, consumers may soon find themselves facing higher costs for a wide range of devices, from personal electronics to essential medical equipment. The ongoing fluctuations in RAM prices underscore the interconnected nature of technology supply chains and the impact of emerging trends like AI on everyday consumer products.

According to American Bazaar, the implications of rising RAM prices could be felt across various sectors, prompting both manufacturers and consumers to prepare for a potentially challenging economic landscape in 2026.

-+=