Under Armour Data Breach Affects Millions of Users Worldwide

Under Armour is investigating a significant data breach affecting approximately 72 million customers, following the online posting of sensitive records by hackers.

Sportswear and fitness brand Under Armour is currently probing claims of a substantial data breach after customer records were discovered on a hacker forum. The breach came to light when millions of users received alerts indicating that their personal information may have been compromised.

While Under Armour maintains that its investigation is ongoing, cybersecurity experts analyzing the leaked data suggest it contains personal details that could be linked to customer purchases. The breach notification service Have I Been Pwned reported that the dataset includes email addresses associated with around 72 million individuals, prompting the organization to directly notify affected users.

The scale of this exposure has raised significant concerns regarding the potential misuse of consumer data long after a breach has occurred. The stolen data is reportedly tied to a ransomware attack that took place in November 2025, for which the Everest ransomware group claimed responsibility. This group attempted to extort Under Armour by threatening to leak internal files.

In January 2026, customer data from this incident surfaced on a popular hacking forum. Shortly thereafter, Have I Been Pwned obtained a copy of the data and began alerting affected users via email. Reports indicate that the seller claimed the stolen files originated from the November breach and included millions of customer records.

The leaked dataset is believed to encompass a wide range of personal information. While there has been no confirmation regarding the exposure of payment card details, the data remains highly valuable to cybercriminals. Compromised information may include names, email addresses, birth dates, and purchase histories, which can be exploited to create convincing scams.

Researchers have also identified email addresses belonging to Under Armour employees within the leaked data, increasing the risk of targeted phishing and business email compromise scams. An Under Armour spokesperson stated, “We are aware of claims that an unauthorized third party obtained certain data. Our investigation of this issue, with the assistance of external cybersecurity experts, is ongoing. Importantly, at this time, there’s no evidence to suggest this issue affected UA.com or systems used to process payments or store customer passwords. Any implication that sensitive personal information of tens of millions of customers has been compromised is unfounded. The security of our systems and data is a top priority for UA, and we take this issue very seriously.”

Even in the absence of passwords or payment details, this breach poses serious risks. Names, email addresses, birth dates, and purchase histories can be used to craft highly convincing phishing attempts. Cybercriminals often reference actual purchases or account details to gain the trust of their targets. Consequently, phishing emails related to this breach may appear legitimate and urgent.

Over time, exposed data can be combined with information from other breaches to create detailed identity profiles that are increasingly difficult to protect against. To determine if your email has been affected, visit the Have I Been Pwned website, which serves as the official source for this newly added dataset. Enter your email address to check if your information appears in the leak.

If you received a breach alert or suspect your information may be included, taking immediate action can help mitigate future risks. If you have reused the same password across multiple sites, it is advisable to change those passwords promptly. Even if Under Armour asserts that passwords were not compromised, exposed email addresses can be used in follow-up attacks.

Utilizing a password manager can simplify this process by generating strong, unique passwords for each account and securely storing them. This way, a single breach cannot jeopardize multiple accounts. Additionally, check if your email has been exposed in previous breaches. Many password managers now include a built-in breach scanner that verifies whether your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords immediately and secure those accounts with new, unique credentials.

Cybercriminals often act swiftly following a breach. As a result, emails that seem to originate from Under Armour or other fitness brands may appear in your inbox. Exercise caution with messages claiming there is an issue with your account or a recent purchase. Avoid clicking links or opening attachments in unexpected emails; instead, visit the company’s official website directly if you need to verify your account.

Employing robust antivirus software can also help block malicious links and attachments before they can cause harm. To protect yourself from harmful links that may install malware and potentially access your private information, ensure you have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, safeguarding your personal information and digital assets.

Implementing two-factor authentication (2FA) adds an additional layer of security. Even if someone obtains your password, they would still require a second step to log in. Start by enabling 2FA for your email accounts, then extend it to shopping, fitness, and financial accounts. This simple measure can prevent many account takeover attempts linked to breached data.

After a breach, attackers frequently test stolen email addresses across various sites, which can trigger password reset emails that you did not request. Pay close attention to these alerts. If you receive one, secure the account immediately by changing the password and reviewing recent activity.

The Under Armour data breach serves as a reminder that even major global brands can become targets. While payment systems appear unaffected, the exposure of personal data still presents long-term risks for millions of customers. Data breaches often unfold over time, and what begins as leaked records can later fuel scams, identity theft, and targeted attacks. Remaining vigilant now can help reduce the likelihood of more significant issues in the future.

For further information, visit Cyberguy.com, where you can find expert-reviewed password managers, antivirus solutions, and data removal services to help protect your personal information.

According to CyberGuy, the Under Armour data breach highlights the ongoing risks associated with data security in the digital age.

Elon Musk Considers Company Merger Ahead of SpaceX IPO

Elon Musk is considering a merger of his companies, including SpaceX and xAI, as the rocket manufacturer prepares for a significant IPO this year.

Elon Musk, the CEO of Tesla, is reportedly exploring the possibility of merging his various companies, including SpaceX and xAI. This move comes in the wake of his decision to utilize Tesla funds to support xAI, raising questions among investors about the potential synergies between Musk’s ventures in space exploration, autonomous driving, and artificial intelligence.

According to a report by Bloomberg, SpaceX is in discussions regarding a merger with Tesla, Musk’s electric vehicle company. Gene Munster, a Tesla shareholder and managing partner at xAI investor Deepwater Asset Management, expressed optimism about the merger’s likelihood, stating, “I think it’s highly likely that (xAI) ends up with one of the two parties.”

As SpaceX prepares for a major public offering scheduled for this year, the potential merger with xAI could consolidate Musk’s diverse portfolio, which includes rockets, Starlink satellites, the X social media platform, and the Grok chatbot. This consolidation could streamline operations and enhance strategic coherence across Musk’s enterprises, according to sources familiar with the discussions and regulatory filings.

Dennis Dick, chief market strategist at Stock Trader Network, commented on Musk’s expansive business interests, noting, “Musk has too many separate companies. A major risk thesis for Tesla is that Musk is spreading himself out too much. As a Tesla shareholder, I applaud further consolidation.”

If the merger between SpaceX and xAI proceeds, it is expected that xAI shares would be exchanged for SpaceX shares. This consolidation could represent a significant shift in how Musk manages his extensive business empire, potentially allowing for greater integration of technologies developed across his various companies.

By centralizing operations, Musk could accelerate innovation and streamline decision-making processes, reducing redundancies in research, development, and operations. For investors, a unified structure may clarify growth prospects and simplify valuations, addressing concerns about Musk’s divided attention among multiple high-profile ventures.

From a competitive standpoint, merging these assets could strengthen SpaceX’s position in emerging technology markets, particularly in artificial intelligence and autonomous systems. By aligning expertise, talent, and technological capabilities under one organizational umbrella, Musk may be better equipped to tackle ambitious projects that span multiple industries, including aerospace, defense, and AI-driven commercial applications.

Incorporating xAI into SpaceX’s operations could also enhance the company’s prospects for securing contracts with the Pentagon, which has been actively seeking to increase AI adoption within military networks. Caleb Henry, an analyst at Quilty Analytics, highlighted this potential advantage, noting that the merger could position SpaceX favorably in the defense sector.

However, merging different corporate cultures, compliance requirements, and financial structures could pose challenges. If not managed carefully, these complexities could create friction or slow down execution, impacting both short-term performance and long-term strategic outcomes. How Musk navigates these challenges will likely play a crucial role in the success of the merger.

Ultimately, the potential consolidation of Musk’s companies reflects his ambition to create a cohesive ecosystem of interrelated technologies. This strategy could position SpaceX and his other ventures for a new era of innovation and market influence, although the outcome remains uncertain and contingent upon regulatory approvals, investor support, and effective execution.

The broader implications of such a merger could reshape investor perceptions of Musk’s ventures, potentially attracting capital from those interested in a unified tech ecosystem. Market reactions may vary based on the effectiveness of the integration process, and analysts will likely debate whether the potential synergies outweigh the risks associated with overconcentration. Additionally, this move could prompt competitors to reevaluate their strategies, considering partnerships or mergers to remain competitive in overlapping sectors.

As the situation develops, stakeholders will be closely monitoring Musk’s next steps and the potential impact on the tech landscape.

According to Bloomberg, the discussions surrounding the merger are ongoing, and the final outcome will depend on various factors, including regulatory approvals and investor sentiment.

Humanoid Robot Designs Building, Making Architectural History

Ai-Da Robot has made history as the first humanoid robot to design a building, presenting a modular housing concept for future lunar and Martian bases at the Utzon Center in Denmark.

At the Utzon Center in Denmark, Ai-Da Robot, recognized as the world’s first ultra-realistic robot artist, has achieved a groundbreaking milestone by becoming the first humanoid robot to design a building. The project, titled Ai-Da: Space Pod, introduces a modular housing concept intended for future bases on the Moon and Mars.

This innovative endeavor marks a significant shift in Ai-Da’s capabilities, moving from creating art to conceptualizing physical spaces for both humans and robots. Previously, Ai-Da garnered attention for her work in drawing, painting, and performance art, which sparked global discussions about the role of robots in creative fields.

The exhibition “I’m not a robot,” currently on display at the Utzon Center, runs through October and delves into the creative potential of machines. As robots increasingly demonstrate the ability to think and create independently, visitors to the exhibition can engage with Ai-Da’s drawings, paintings, and architectural designs. The exhibition also features a glimpse into Ai-Da’s creative process through sketches, paintings, and a video interview.

Ai-Da is not merely a digital avatar or animation; she possesses camera eyes, advanced AI algorithms, and a robotic arm that enables her to draw and paint in real time. Developed in Oxford and constructed in Cornwall in 2019, Ai-Da’s versatility spans multiple disciplines, including painting, sculpture, poetry, performance, and now architectural design.

Aidan Meller, the creator of Ai-Da and Director of Ai-Da Robot, explains the significance of the Space Pod concept. “Ai-Da presents a concept for a shared residential area called Ai-Da: Space Pod, foreshadowing a future where AI becomes an integral part of architecture,” he states. “With intelligent systems, a building will be able to sense and respond to its occupants, adjusting light, temperature, and digital interfaces according to needs and moods.”

The Space Pod design is intentionally modular, allowing each unit to connect with others through corridors, fostering a shared residential environment. Ai-Da’s artistic vision includes a home and studio suitable for both humans and robots. According to her team, these designs could evolve into fully realized architectural models through 3D renderings and construction, potentially adapting to planned Moon or Mars base camps.

While the concept primarily targets future extraterrestrial bases, it is also feasible to create a prototype on Earth. This aspect is particularly relevant as space agencies prepare for extended missions beyond our planet. Meller emphasizes the timeliness of the project, noting, “With our first crewed Moon landing in 50 years scheduled for 2027, Ai-Da: Space Pod is a simple unit connected to other Pods via corridors.” He adds, “Ai-Da is a humanoid designing homes, which raises questions about the future of architecture as powerful AI systems gain greater agency.”

The exhibition aims to provoke thought and discomfort regarding the rapid pace of technological advancement. Meller points to developments in emotional recognition through biometric data, CRISPR gene editing, and brain-computer interfaces, each carrying both promise and ethical risks. He references dystopian themes from literature, such as Aldous Huxley’s “Brave New World,” and cautions about the potential misuse of powerful technologies.

Line Nørskov Davenport, Director of Exhibitions at the Utzon Center, describes Ai-Da as a “confrontational” figure, stating, “The very fact that she exists is confrontational. Ai-Da is an AI shaker, a conversation starter.” This exhibition transcends the realms of robotics and space exploration, highlighting the swift transition of AI from a creative tool to a decision-maker in architecture and housing.

As AI begins to influence the design of living spaces, critical questions about control, ethics, and accountability arise. If a robot can conceptualize homes for the Moon, it raises concerns about how such technology might shape building functionality on Earth.

Ai-Da’s work challenges the notion of what is possible for humanoid robots and their role in society. Her presence in a major cultural institution ignites discussions about creativity, technology, and responsibility. As the boundaries between human and machine continue to blur, the implications of AI’s involvement in architecture and design become increasingly significant.

The question remains: if AI can design the homes of our future, how much creative control should humans be willing to relinquish? This inquiry invites ongoing dialogue about the intersection of technology and human creativity.

According to CyberGuy, Ai-Da’s Space Pod serves as a catalyst for critical reflection on the evolving relationship between humans and artificial intelligence.

Wolf Species Extinct for 12,500 Years Resurrected, Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species that last roamed the Earth over 12,500 years ago, using advanced genetic technologies.

A U.S. company, Colossal Biosciences, has announced a groundbreaking achievement: the revival of the dire wolf, a species that has been extinct for more than 12,500 years. The dire wolf, made famous by the HBO series “Game of Thrones,” is said to have been brought back to life through innovative genome-editing and cloning techniques.

According to Colossal Biosciences, this marks the world’s first successful instance of what they term a “de-extincted animal.” However, some experts have raised concerns, suggesting that the company may have merely genetically modified existing wolves rather than truly resurrecting the extinct apex predator.

Historically, dire wolves roamed the American midcontinent during the Ice Age. The oldest confirmed fossil of a dire wolf, dating back approximately 250,000 years, was discovered in the Black Hills of South Dakota. In “Game of Thrones,” these wolves are portrayed as larger and more intelligent than their modern counterparts, exhibiting fierce loyalty to the Stark family, a central noble house in the series.

Colossal’s project has produced three litters of dire wolves, including two adolescent males named Romulus and Remus, and a female puppy called Khaleesi. The scientists utilized blood cells from a living gray wolf and employed CRISPR technology—short for “clustered regularly interspaced short palindromic repeats”—to make genetic modifications at 20 different sites. According to Beth Shapiro, Colossal’s chief scientist, these modifications were designed to replicate traits believed to have helped dire wolves survive in cold climates during the Ice Age, such as larger body sizes and longer, fuller, light-colored fur.

Of the 20 genome edits made, 15 correspond to genes identified in actual dire wolves. The ancient DNA used in the project was extracted from two fossils: a tooth from Sheridan Pit, Ohio, approximately 13,000 years old, and an inner ear bone from American Falls, Idaho, dating back around 72,000 years.

The genetic material was transferred into an egg cell from a domestic dog, and the embryos were subsequently implanted into surrogate domestic dogs. After a gestation period of 62 days, the genetically engineered pups were born.

Ben Lamm, CEO of Colossal Biosciences, described this achievement as a significant milestone, emphasizing that it represents the first of many examples showcasing the effectiveness of the company’s comprehensive de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal Biosciences has previously announced similar initiatives aimed at genetically altering cells from living species to create animals resembling other extinct species, such as woolly mammoths and dodos. In addition to the dire wolves, the company recently reported the birth of two litters of cloned red wolves, which are critically endangered. This development is seen as evidence of the potential for conservation through de-extinction technology.

During a recent announcement, Lamm mentioned that the team had met with officials from the Interior Department in late March regarding their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists have expressed skepticism about the feasibility of restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, voiced concerns about the claims made by Colossal Biosciences. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw remarked. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences asserts that the wolves are currently thriving in a secure 2,000-acre ecological preserve in Texas, which is certified by the American Humane Society and registered with the USDA. Looking ahead, the company plans to restore the species in secure and expansive ecological preserves, potentially on indigenous land.

This ambitious project raises important questions about the future of conservation and the ethical implications of de-extinction efforts. As the debate continues, the work of Colossal Biosciences may pave the way for new approaches to preserving biodiversity.

According to Fox News, the implications of this project extend beyond mere scientific curiosity, potentially influencing conservation strategies for endangered species in the years to come.

Samsung Galaxy S26 Ultra Leaks Reveal February 2026 Launch Details

Leaks suggest that Samsung will unveil its Galaxy S26 series, including the Galaxy S26 Ultra, during a Galaxy Unpacked event on February 25, 2026, with a likely on-sale date in March.

Samsung enthusiasts are gearing up for one of the most significant smartphone launches of 2026, as recent leaks and industry hints indicate a Galaxy Unpacked event scheduled for February 25, 2026. During this event, Samsung is expected to unveil its next-generation Galaxy S26 lineup, which includes the Galaxy S26, Galaxy S26+, and Galaxy S26 Ultra.

Traditionally, Samsung kicks off its flagship smartphone cycle with the Galaxy S series, typically announcing new models in January or February. However, this year’s unveiling appears to be more than a month later than usual, a shift that has generated considerable excitement among fans eager to see what innovations the South Korean tech giant will introduce.

Insider tipster Evan Blass recently shared a leaked invitation on X, confirming the February 25 launch date for the Galaxy Unpacked event. The teaser image also hints at the simultaneous launch of Samsung’s next-generation Galaxy Buds 4 and Buds 4 Pro, making this event a significant occasion for multiple new product introductions. This confirmed date aligns with various recent leaks and supports ongoing rumors regarding the phone’s launch timeline.

The Galaxy S26 series is anticipated to follow a familiar three-model structure: standard, Plus, and Ultra. This return to a traditional format comes after the Galaxy S25 Edge was reportedly dropped due to lackluster sales.

In terms of display and design, all models are expected to feature high-quality AMOLED displays with 120Hz refresh rates, improved brightness, and enhanced viewing angles. Some variants may also incorporate new privacy display technology to protect on-screen content from prying eyes.

Performance-wise, the base Galaxy S26 and S26+ may utilize Samsung’s in-house Exynos 2600 chipset, while the S26 Ultra is likely to be powered by Qualcomm’s Snapdragon 8 Elite Gen 5, a robust flagship processor.

Camera capabilities are also set to receive a significant upgrade, with early reports indicating that the Ultra model will feature a 200-megapixel main sensor. This will be complemented by advanced cropping or zoom solutions and wider aperture lenses designed to enhance low-light photography.

Additionally, leaked information suggests that the entire Galaxy S26 range may support upgraded wireless charging and MagSafe-style accessories through Qi2 compatibility.

While Samsung has yet to officially confirm the launch dates, leaks from various sources, including tipsters like Ice Universe, suggest the following timeline:

Galaxy Unpacked Event: February 25, 2026

Pre-Orders Start: Around February 26

Pre-Sale Period: Early March

Official On-Sale Date: Around March 11, 2026

These dates may vary slightly by region, but the overall trend indicates a late February introduction followed by a March market debut.

As for pricing, the expected costs for the Galaxy S26 series in India are as follows:

The Galaxy S26 is likely to start at around ₹84,999, with a base storage option of 256GB, as the 128GB variant may be discontinued. Higher storage options, such as 512GB, are expected to be priced above the entry-level model.

The Galaxy S26 Plus is anticipated to have a starting price of approximately ₹1,04,999, with the base 256GB variant remaining similar to last year’s model. The 512GB variant is likely to be priced higher than previous Plus models.

For the Galaxy S26 Ultra, the expected starting price is around ₹1,34,999. The 256GB and 512GB versions may be slightly cheaper than their S25 Ultra counterparts, while the 1TB variant is expected to maintain a price similar to last year’s Ultra model.

The delay in the launch of the Galaxy S26 series is noteworthy for fans and potential buyers. Historically, Samsung has unveiled its Galaxy S-series smartphones in late January or early February, as seen with the Galaxy S25 launch in January 2025. This year’s later debut may be attributed to strategic changes in the lineup and product planning.

This delay has heightened anticipation, with fans speculating that Samsung might be fine-tuning hardware upgrades, storage options, and design features. As the February 25 event approaches, more detailed leaks regarding specifications and pricing are expected to surface.

For tech enthusiasts and smartphone buyers, the late February launch offers a compelling reason to postpone upgrades until Samsung’s next flagship arrives. With anticipated improvements across display, chipset, camera, battery, and AI features, the Galaxy S26 series is poised to compete vigorously in the premium smartphone segment.

The introduction of new Galaxy Buds at the same event further enhances the value of the February 25 Unpacked, making it one of the most eagerly awaited tech events of early 2026.

These insights into the upcoming Galaxy S26 series are based on leaks and industry speculation, according to The Sunday Guardian.

Startup Bazaar to Host Events in UAE on January 31 and February 2

The American Bazaar’s Startup Bazaar series will debut in the UAE with events in Abu Dhabi and Dubai, focusing on AI and emerging technologies.

The American Bazaar is set to launch its flagship Startup Bazaar series in the United Arab Emirates, featuring back-to-back events on January 31, 2026, in Abu Dhabi and February 2, 2026, in Dubai. These events aim to unite startup founders, investors, and leaders in the tech ecosystem to explore and showcase innovations in artificial intelligence and other emerging technologies.

Positioned at the intersection of technology, investment, and policy, the Startup Bazaar events promise a vibrant mix of ideas, discussions, and networking opportunities that will help shape the future of AI-driven entrepreneurship.

The Abu Dhabi event will take place on January 31, while the Dubai event is scheduled for February 2. Both events are organized in partnership with Talrop, an India-based technology and innovation company dedicated to fostering startups, developing digital products, and nurturing tech talent across the Gulf Cooperation Council (GCC) region.

These gatherings are expected to attract U.S.-based investors alongside their counterparts from the GCC and India, as well as senior executives and high-growth founders. This diverse mix will facilitate a unique cross-border exchange of insights and perspectives.

As the UAE continues to establish itself as a global hub for advanced technologies, the Startup Bazaar will highlight innovations in AI, deep tech, and other frontier technologies, particularly in the energy, healthtech, and pharmaceutical sectors. These discussions are anticipated to contribute to economic transformation and create tangible impacts in the region.

“The UAE is emerging as one of the most exciting and execution-focused AI startup ecosystems globally,” said Sanjay Puri, a member of the U.S. investor delegation attending the events. “This delegation presents a valuable opportunity to engage with founders, universities, family offices, and industry leaders like G42, exploring how talent, capital, and policy are converging at scale. I am particularly interested in how the region is translating research and ambition into globally competitive AI companies, and I see significant potential for long-term cross-border partnerships and investment.”

Designed to be more than a traditional conference, Startup Bazaar offers an immersive experience for startup founders, technologists, investors, policymakers, corporate innovation leaders, researchers, and professionals. Attendees will have the chance to engage directly with the U.S. delegation, which includes angel investors and AI experts.

A highlight of both events will be the Startup Showcase, where selected startups will pitch their ideas to potential investors. For founders seeking visibility, feedback, and funding opportunities, this showcase serves as a direct gateway to international markets.

As Startup Bazaar makes its debut in Abu Dhabi and Dubai, it not only fosters conversations about innovation but also brings together the people, capital, and ambition necessary to drive future advancements.

For those interested in attending, registration is now open for both the Abu Dhabi and Dubai editions of Startup Bazaar.

According to The American Bazaar, the series promises to be a significant event in the region’s tech landscape.

Dr. Satheesh Kathula Appointed Chair of Board of Directors, Indo-American Press Club

The Indo-American Press Club (IAPC), the largest and most influential organization representing journalists and media professionals of Indian origin across North America, has announced the appointment of Dr. Satheesh Kathula as Chair of its Board of Directors for 2026. A distinguished oncologist, community leader, and immediate past president of the American Association of Physicians of Indian Origin (AAPI), Dr. Kathula brings more than two decades of leadership and public service to this prominent role.

Dr. Kathula has served as a practicing oncologist for nearly 25 years, earning widespread respect for his compassionate care and contributions to the advancement of cancer treatment.

His association with IAPC spans many years. In 2005, he received the organization’s prestigious Leadership Award in recognition of his service and advocacy.

Accepting the new role, Dr. Kathula outlined a bold and forward-looking vision for the organization. “As the Chair of the Indo-American Press Club, I will champion ethical, evidence-based journalism, strengthen Indo–U.S. narratives, and elevate health and science reporting,” he said. Emphasizing modernization and broader engagement, he added, “My focus is on building bridges across cultures, modernizing our digital presence, and expanding our influence beyond ethnic media. With unity, integrity, and responsible innovation at the core, I aim to create a lasting legacy that empowers journalists, informs communities, and positions the Club as a trusted voice of impact.”

Reflecting on the challenges facing media professionals today, Dr. Kathula noted, “These are unprecedented times, especially for journalists and the media, when the very freedom of expression is at risk. At IAPC, we envisage our vision through collective efforts and advocacy activities through our nearly one thousand members across the U.S. and Canada, by being a link between the media fraternity and the world at large.”

Ginsmon Zachariah, Founding Chair of the IAPC Board of Directors, highlighted the broader mission of the organization. “Our homeland India is known to have a vibrant, active, and free media, which plays a vital role in the functioning of the world’s largest democracy,” he said. “As members of the media in our adopted land, we recognize our responsibility to be a source of effective communication. We have a role to play in shaping a just and equitable world where everyone enjoys freedom and liberty.”

Providing historical context, Ajay Ghosh, Founding President of IAPC, reflected on the organization’s origins. “We as individuals and corporations representing print, visual, electronic, and online media realized that we had a greater role to play,” he said. “For decades, many of us stood alone in a vast media landscape, our voices often drowned out. IAPC was formed to fill this vacuum—a common platform to raise our collective voice, pool our talents, and respond cohesively to the challenges of the modern world.”

A graduate of Siddhartha Medical College in Vijayawada, Andhra Pradesh, Dr. Kathula currently serves as a clinical professor of medicine at Wright State University’s Boonshoft School of Medicine in Dayton, Ohio. Dr. Kathula completed a Global Healthcare Leaders Program at Harvard University.He also holds a certificate in Artificial Intelligence in Healthcare from Stanford University and is a Diplomate of the American Board of Lifestyle Medicine.

He has authored several medical papers and published a book “Immigrant Doctors: Chasing the Big American Dream” highlighting the contribution of immigrant doctors, their struggles and triumphs. It is Amazon’s best selling. He embarked on his second book on cancer awareness for general public.

Dr. Kathula’s professional achievements extend far beyond medicine. Dr. Kathula’s commitment to community service is equally noteworthy. He has led bone marrow donor drives to address the severe shortage of South Asian donors and was named “Man of the Year – 2018” by the Leukemia and Lymphoma Society for raising funds to help fund the research to find newer treatments and cures for blood cancers.

His commitment to community service is equally noteworthy. His philanthropic work in India includes establishing the Pathfinder Institute of Pharmacy and Educational Research (PIPER) in Warangal, Telangana, which has already graduated more than 1,000 students. He has also supported medical camps and donated essential infrastructure—including a defibrillator, water purification system, CPR center and library—to his native community.He has also supported medical camps and donated essential infrastructure—including a defibrillator, water purification system, and library—to his native community.

Dr. Kathula has served AAPI in numerous leadership roles, including Regional Director, Trustee, Treasurer, Secretary, Vice President, and President-Elect before assuming the presidency in July 2024.

Dr. Kathula has received numerous honors, including the U.S. Presidential Lifetime Achievement Award. In December 2024, he was honored with the Inspirational Award by the Raising Awareness of Youth with Autism (RAYWA) Foundation at a gala held at New York’s iconic Pierre Hotel. In May 2025, IAPC itself bestowed upon him its Lifetime Achievement Award.

Founded in 2013, the Indo-American Press Club continues to serve as a unifying platform for journalists of Indian origin, fostering collaboration, professionalism, and a commitment to the public good. More information is available at www.indoamericanpressclub.com..

Tiny Autonomous Robots Achieve Independent Swimming Capability

Researchers have developed the smallest fully programmable autonomous robots capable of swimming, potentially transforming medicine and healthcare.

For decades, the concept of microscopic robots has largely existed in the realm of science fiction. Films like “Fantastic Voyage” fueled our imaginations, suggesting that tiny machines could one day navigate the human body to repair ailments from within. However, this vision remained elusive, primarily due to the constraints imposed by physics.

Now, a significant breakthrough from researchers at the University of Pennsylvania and the University of Michigan has altered this narrative. The teams have successfully created the smallest fully programmable autonomous robots to date, and these innovative machines can swim.

Measuring approximately 200 by 300 by 50 micrometers, these robots are smaller than a grain of salt and comparable in size to a single-celled organism. Unlike traditional robots that rely on legs or propellers for movement, these microscopic machines utilize electrokinetics. Each robot generates a small electrical field that attracts charged ions in the surrounding fluid, effectively creating a current that propels the robot forward without any moving parts. This design not only enhances durability but also simplifies handling with delicate laboratory tools.

Each robot is powered by tiny solar cells that produce just 75 nanowatts of energy—over 100,000 times less than what a smartwatch consumes. To achieve this level of efficiency, engineers had to redesign various components, including ultra-low voltage circuits and a custom instruction set that condenses complex behaviors into a few hundred bits of memory. Despite these limitations, each robot is capable of sensing its environment, storing data, and making decisions about its next movements.

Due to their size, the robots cannot accommodate antennas. Instead, the research team drew inspiration from nature, enabling each robot to perform a specific wiggle pattern to convey information, such as temperature. This motion follows a precise encoding scheme that researchers can interpret by observing the robots under a microscope. This method of communication is reminiscent of how bees convey messages through movement. Programming the robots is equally innovative; researchers use light signals that the robots interpret as instructions, with a built-in passcode to prevent interference from random light sources.

In current experiments, the robots exhibit thermotaxis, meaning they can sense heat and swim autonomously toward warmer areas. This capability suggests promising future applications, such as tracking inflammation, identifying disease markers, or delivering drugs with pinpoint accuracy. While light can already power these robots near the skin, researchers are also investigating ultrasound as a potential energy source for deeper environments.

Thanks to their construction using standard semiconductor manufacturing techniques, these robots can be produced en masse. More than 100 robots can fit on a single chip, and manufacturing yields have already surpassed 50%. In large-scale production, the estimated cost could drop below one cent per robot, making the concept of disposable robot swarms a tangible reality.

This technology is not merely about creating flashy gadgets; it represents a significant advancement in scalability. Robots of this size could one day monitor health at the cellular level, construct materials from the ground up, or explore environments that are too fragile for larger machines. Although practical medical applications are still years away, this breakthrough indicates that true autonomy at the microscale is finally within reach.

For nearly half a century, the promise of microscopic robots has felt like a dream that science could never fully realize. However, this research, published in Science Robotics, marks a pivotal shift. By embracing the unique physics of the microscale rather than resisting it, engineers have unlocked an entirely new class of machines. This is just the beginning, but it represents a significant leap forward. As sensing, movement, and decision-making capabilities are integrated into these nearly invisible robots, the future of robotics is poised to look remarkably different.

As we consider the potential of tiny robots swimming through our bodies, the question arises: would we trust them to monitor our health or deliver treatment? This inquiry invites further exploration into the future of healthcare technology.

According to Science Robotics, the implications of this research could extend far beyond initial expectations, paving the way for revolutionary advancements in medical science.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals.

Google is embarking on an innovative project that harnesses artificial intelligence (AI) to explore the complexities of dolphin communication, with the ultimate aspiration of enabling humans to converse with these remarkable creatures.

Dolphins are widely recognized as some of the most intelligent animals on the planet, celebrated for their emotional depth and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP)—a Florida-based non-profit dedicated to studying dolphin sounds for over four decades—Google is developing a new AI model named DolphinGemma.

The Wild Dolphin Project has spent years correlating various dolphin sounds with specific behavioral contexts. For example, signature whistles are commonly used by mothers to locate their calves, while burst pulse “squawks” are often associated with aggressive encounters among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by WDP, Google has constructed DolphinGemma, which builds upon its existing lightweight AI model known as Gemma. This new model is designed to analyze a vast library of dolphin recordings, identifying patterns, structures, and potential meanings behind the vocalizations of these marine mammals.

Over time, DolphinGemma aims to categorize dolphin sounds into distinct groups—similar to words, sentences, or expressions in human language. According to a blog post from Google, “By identifying recurring sound patterns, clusters, and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort.”

The project envisions that these identified patterns, combined with synthetic sounds created by researchers to represent objects that dolphins enjoy interacting with, may eventually lead to the establishment of a shared vocabulary for interactive communication between humans and dolphins.

DolphinGemma employs audio recording technology from Google’s Pixel phones to capture high-quality sound recordings of dolphin vocalizations. This technology is adept at isolating dolphin clicks and whistles from background noise, such as waves, boat engines, or underwater static. Clean audio is essential for AI models like DolphinGemma, as noisy data can hinder the AI’s ability to learn effectively.

Google plans to release DolphinGemma as an open model this summer, making it accessible for researchers worldwide to utilize and adapt for their own studies. Although the model has been primarily trained on Atlantic spotted dolphins, researchers believe it could also be fine-tuned to study other species, such as bottlenose or spinner dolphins.

In a statement, Google expressed its hope that by providing tools like DolphinGemma, researchers globally will be empowered to analyze their own acoustic datasets, accelerate the search for patterns, and collectively enhance our understanding of these intelligent marine mammals.

As this groundbreaking project unfolds, the potential for deeper human-dolphin communication may soon become a reality, opening new avenues for interaction with one of the ocean’s most fascinating inhabitants, according to Fox News.

AI Robot Provides Emotional Support for Pets

Aura, an AI-powered pet robot by Tuya Smart, aims to enhance emotional care for pets by tracking their behavior and providing real-time interaction.

Tuya Smart has unveiled Aura, its first AI-powered companion robot designed specifically for household pets, including cats and dogs. This innovative device utilizes artificial intelligence to recognize pet behaviors, movements, and vocal cues, addressing a growing need for emotional engagement in pet care.

The concept behind Aura is straightforward: pets require more than just food and surveillance; they need attention, interaction, and reassurance. Aura actively monitors pets at home, observing behavioral changes and responding in real time, which helps owners gain insights into their pets’ emotional states. Many pets experience stress or anxiety when left alone for extended periods, with subtle signs often emerging first. For instance, a dog may stop playing, while a cat might hide or groom excessively. Aura steps in during these quiet moments, providing engagement and companionship rather than leaving pets in an empty room.

While traditional smart feeders and pet cameras cover basic needs, emotional care presents a different challenge. Pets are inherently social creatures, and their moods can shift rapidly with changes in routine. Aura tracks behavior and listens for variations in sound patterns, allowing it to discern whether a pet is feeling excited, anxious, lonely, or relaxed. This information is relayed to the owner’s smartphone in real time, enabling early detection of potential issues.

Aura functions more like a companion than a stationary device. It employs multiple systems throughout the day to keep pets engaged. Rather than waiting for a button press, Aura proactively seeks opportunities for interaction, transforming long, quiet hours into moments of play and stimulation. Additionally, it captures everyday highlights—such as playful bursts, calm naps, and amusing interactions—using AI pet recognition and intelligent tracking. These moments can be automatically compiled into short videos, allowing owners to stay connected with their pets even when they are away. This feature also makes it easier to document and share special moments with family or on social media.

Movement is a key aspect of Aura’s functionality. Equipped with V-SLAM navigation, binocular vision, and AIVI object recognition, Aura can navigate freely around the home while avoiding obstacles. When its battery runs low, it autonomously returns to its charging dock, ensuring it remains ready for action without requiring constant attention from owners.

Aura is designed to integrate with Tuya’s broader ecosystem, which offers services beyond basic pet care. These services include smart pet boarding, health and medical care, behavior training, grooming, customization, and community tools. Rather than focusing on a single task, Aura serves as a central hub for comprehensive pet care that can evolve over time.

While Aura currently targets pet care, the underlying technology has broader implications. The principles of emotional awareness, proactive assistance, and ecosystem integration could also be applied to elder care, home monitoring, and family connectivity. By starting with pets, Tuya establishes a clear emotional use case while laying the groundwork for future advancements in home robotics.

Despite the excitement surrounding Aura, Tuya has yet to announce a release date or pricing details. The company introduced the robot earlier this month at CES 2026, but specifics regarding availability and cost remain unclear. These details are expected to emerge as the company approaches a wider consumer launch.

Aura represents a significant shift in how smart home technology interacts with pets, moving beyond simple monitoring to embrace interaction and emotional awareness. If Aura fulfills its promise, it could provide pet owners with greater peace of mind when leaving their pets home alone, while maintaining a connection throughout the day.

As technology advances to interpret and respond to pet emotions in real time, it raises questions about the role of such devices in our daily routines. Would you trust an AI companion to become part of your pet care regimen, or would that feel like an overstep? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the future of pet care is evolving with technology that prioritizes emotional well-being.

Google Fast Pair Vulnerability Allows Hackers to Take Control of Headphones

Google has responded to serious security flaws in its Fast Pair technology, which could allow hackers to hijack Bluetooth headphones and other devices, by issuing patches and updating certification requirements.

Google’s Fast Pair technology, designed to simplify Bluetooth connections, is facing significant security vulnerabilities that could allow unauthorized access to headphones, earbuds, and speakers. Researchers from KU Leuven have identified these flaws, which they have dubbed “WhisperPair.” This method enables nearby attackers to connect to devices without the owner’s knowledge, raising serious privacy concerns.

One of the most alarming aspects of this vulnerability is that it affects not only Android users but also iPhone users. Fast Pair operates by broadcasting a device’s identity to nearby phones and computers, facilitating quick connections. However, the researchers discovered that many devices fail to enforce a critical rule: they continue to accept new pairings even when already connected. This oversight creates an opportunity for malicious actors.

Within Bluetooth range, an attacker can silently pair with a device in approximately 10 to 15 seconds. Once connected, they can disrupt calls, inject audio, or even activate the device’s microphone. Notably, this attack can be executed using standard devices such as smartphones, laptops, or low-cost hardware like Raspberry Pi, allowing the attacker to effectively assume control of the device.

The researchers tested 17 Fast Pair-compatible devices from well-known brands, including Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech, and Google. Alarmingly, most of these products had passed Google’s certification testing, raising concerns about the efficacy of the security checks in place.

Some affected models pose an even greater privacy risk. Certain Google and Sony devices integrate with Find Hub, a feature that uses nearby devices to estimate location. If an attacker connects to a headset that has never been linked to a Google account, they can continuously track the user’s movements. If the victim later receives a tracking alert, it may appear to reference their own device, making it easy to dismiss as an error.

Another issue that many users may overlook is the necessity of firmware updates for headphones and speakers. These updates typically come through brand-specific apps that many users do not install. Consequently, vulnerable devices could remain exposed for extended periods if users do not take action.

The only way to mitigate this vulnerability is by installing a software update provided by the device manufacturer. While many companies have already released patches, updates may not yet be available for every affected model. Users are advised to check directly with their manufacturers to confirm whether a security update exists for their specific device.

Importantly, the flaw does not lie within Bluetooth itself but rather within the convenience layer built on top of it. Fast Pair prioritized speed over strict ownership enforcement, which researchers argue should require cryptographic proof of ownership. Without such measures, convenience features can become potential attack surfaces. Security and ease of use can coexist, but they must be designed in tandem.

In response to these vulnerabilities, Google has been collaborating with researchers to address the WhisperPair flaws. The company began distributing recommended patches to headphone manufacturers in early September and confirmed that its own Pixel headphones have been updated.

A Google spokesperson stated, “We appreciate collaborating with security researchers through our Vulnerability Rewards Program, which helps keep our users safe. We worked with these researchers to fix these vulnerabilities, and we have not seen evidence of any exploitation outside of this report’s lab setting. As a best security practice, we recommend users check their headphones for the latest firmware updates. We are constantly evaluating and enhancing Fast Pair and Find Hub security.”

Google has indicated that the core issue stemmed from some accessory manufacturers not fully adhering to the Fast Pair specification, which requires devices to accept pairing requests only when a user has intentionally placed the device into pairing mode. Failures to enforce this rule contributed to the audio and microphone risks identified by researchers.

To mitigate future risks, Google has updated its Fast Pair Validator and certification requirements to explicitly test whether devices properly enforce pairing mode checks. The company has also provided accessory partners with fixes intended to resolve all related issues once applied.

On the location tracking front, Google has implemented a server-side fix that prevents accessories from being silently enrolled into the Find Hub network if they have never been paired with an Android device. This change addresses the tracking risk across all devices, including Google’s own accessories.

Despite these efforts, researchers have expressed concerns about the speed at which patches reach users and the extent of Google’s visibility into real-world exploitation that does not involve Google hardware. They argue that weaknesses in certification allowed flawed implementations to reach the market at scale, indicating broader systemic issues.

For now, both Google and the researchers agree on one crucial point: users must install manufacturer firmware updates to ensure protection, and the availability of these updates may vary by device and brand.

While users cannot entirely disable Fast Pair, they can take steps to reduce their exposure. If you use a Bluetooth accessory that supports Google Fast Pair, including wireless earbuds, headphones, or speakers, you may be affected. Researchers have developed a public lookup tool that allows users to check whether their specific device model is vulnerable. This tool can be accessed at whisperpair.eu/vulnerable-devices.

To enhance security, users are encouraged to install the official app from their headphone or speaker manufacturer, check for firmware updates, and apply them promptly. Pairing new devices in private spaces and being cautious of unexpected audio interruptions or strange sounds can also help mitigate risks. A factory reset can remove unauthorized pairings, but it does not resolve the underlying vulnerability; a firmware update is still necessary.

Bluetooth should only be active during use, and turning it off when not in use can limit exposure, although it does not eliminate the risk if the device remains unpatched. Always factory reset used headphones or speakers before pairing them to remove hidden links and account associations. Additionally, promptly installing operating system updates can block exploit paths even when accessory updates lag behind.

The WhisperPair vulnerabilities highlight how small conveniences can lead to significant privacy failures. While headphones may seem innocuous, they contain microphones, radios, and software that require regular attention and updates. Neglecting these devices can create blind spots that attackers are eager to exploit. Staying secure now necessitates a proactive approach to devices that users may have previously taken for granted.

For further information and updates, users can refer to CyberGuy.

Smart Pill Technology Confirms When Medication Is Swallowed

The Massachusetts Institute of Technology has developed a smart pill that confirms medication ingestion, potentially improving patient adherence and health outcomes while safely breaking down in the body.

Engineers at the Massachusetts Institute of Technology (MIT) have designed an innovative smart pill that confirms when a patient has swallowed their medication. This advancement aims to enhance treatment tracking for healthcare providers and help patients adhere to their medication schedules, ultimately reducing the risk of missed doses that can jeopardize health.

The smart pill incorporates a tiny, biodegradable radio-frequency antenna made from zinc and cellulose, materials that are already established as safe for medical use. This system fits within existing pill capsules and operates by emitting a signal that can be detected by an external receiver, potentially integrated into a wearable device, from a distance of up to two feet.

This entire process occurs within approximately ten minutes after ingestion. Unlike previous smart pill designs that utilized components that remained intact throughout the digestive system, raising concerns about long-term safety, the MIT team has taken a different approach. Most parts of the antenna decompose in the stomach within days, leaving only a small off-the-shelf RF chip that naturally passes through the body.

Lead researcher Mehmet Girayhan Say emphasized the goal of the project: to provide a reliable confirmation of medication ingestion without the risk of long-term buildup in the body.

This smart pill is not intended for every type of medication but is specifically designed for situations where missing a dose can have serious consequences. Potential beneficiaries include patients who have undergone organ transplants, those managing tuberculosis, and individuals with complex neurological conditions. For these patients, adherence to prescribed medication can be the difference between recovery and severe complications.

Senior author Giovanni Traverso highlighted that the primary focus of this technology is on patient health. The aim is to support individuals rather than monitor them. The research team has published its findings in the journal Nature Communications and is planning further preclinical testing, with human trials expected to follow as the technology progresses toward real-world application.

This research has received funding from several sources, including Novo Nordisk, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital Division of Gastroenterology, and the U.S. Advanced Research Projects Agency for Health.

Missed medication doses contribute to hundreds of thousands of preventable deaths annually and add billions of dollars to healthcare costs. This issue is particularly critical for patients who require consistent treatment over extended periods. For individuals in vulnerable health situations, such as organ transplant recipients or those with chronic illnesses, the implications of missed doses can be life-altering.

While the smart pill technology is still in development, it offers the potential to provide an additional layer of safety for patients relying on critical medications. It could alleviate some of the pressures faced by patients managing complex treatment plans and reduce uncertainty for healthcare providers regarding patient adherence.

However, the introduction of such technology also raises important questions about privacy, consent, and the sharing of medical data. Any future implementation will need robust safeguards to protect patient information.

For those awaiting the availability of this technology, there are still effective ways to stay on track with medication regimens. Utilizing built-in tools on smartphones can help individuals manage their medication schedules effectively.

The concept of a pill that confirms ingestion may seem futuristic, but it addresses a pressing issue in healthcare. By combining simple materials with innovative engineering, MIT researchers have created a tool that could potentially save lives without leaving harmful residues in the body. As testing continues, this approach could significantly reshape the monitoring and delivery of medical treatments.

Would you be comfortable taking a pill that reports when you swallow it if it meant better health outcomes? Share your thoughts with us at Cyberguy.com.

According to MIT, this groundbreaking technology could transform medication adherence and patient care.

Potential Discovery of New Dwarf Planet Challenges Planet Nine Theory

The potential discovery of a new dwarf planet, 2017OF201, may provide fresh insights into the elusive Planet Nine theory and the structure of the Kuiper Belt.

A team of scientists at the Institute for Advanced Study’s School of Natural Sciences in Princeton, New Jersey, has announced the potential discovery of a new dwarf planet, which could lend support to the theory of a theoretical super-planet known as Planet Nine.

The object, designated 2017OF201, is classified as a trans-Neptune object (TNO), which refers to minor planets that orbit the Sun at distances greater than that of Neptune. Located on the fringes of our solar system, 2017OF201 stands out due to its significant size and unusual orbital characteristics.

Led by researchers Sihao Cheng, Jiaxuan Li, and Eritas Yang from Princeton University, the team utilized advanced computational methods to track the object’s distinctive trajectory in the night sky. Cheng noted that the aphelion, or the farthest point in the orbit from the Sun, of 2017OF201 is more than 1,600 times that of Earth’s orbit. In contrast, its perihelion, the closest point to the Sun, is 44.5 times that of Earth’s orbit, a pattern reminiscent of Pluto’s orbit.

2017OF201 takes approximately 25,000 years to complete a single orbit around the Sun. Yang suggested that the object likely experienced close encounters with a giant planet, which may have resulted in its ejection to a wide orbit. Cheng elaborated on this idea, proposing that the object might have initially been expelled to the Oort Cloud, the most distant region of our solar system, before being drawn back toward the Sun.

This discovery has important implications for our understanding of the outer solar system’s structure. In January 2016, astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech) presented research suggesting the existence of a planet approximately 1.5 times the size of Earth, located in the outer solar system. However, the existence of this so-called Planet Nine remains theoretical, as neither Batygin nor Brown has directly observed the planet.

According to the theory, Planet Nine is thought to be roughly the size of Neptune and located far beyond Pluto, in the vicinity of the Kuiper Belt, where 2017OF201 was discovered. If it exists, Planet Nine could possess a mass up to ten times that of Earth and orbit the Sun from a distance up to 30 times greater than that of Neptune. It is estimated that this hypothetical planet would take between 10,000 and 20,000 Earth years to complete one full orbit around the Sun.

Previously, the region beyond the Kuiper Belt was believed to be largely empty. However, the discovery of 2017OF201 suggests that this area may be more populated than previously thought. Cheng remarked that only about 1% of 2017OF201’s orbit is currently visible to astronomers.

“Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system,” Cheng stated in the announcement.

Nasa has indicated that if Planet Nine does exist, it could help explain the peculiar orbits of certain smaller objects within the distant Kuiper Belt. As it stands, the existence of Planet Nine remains largely theoretical, with its potential presence inferred from gravitational patterns observed in the outer solar system.

This latest discovery underscores the ongoing quest to understand the complexities of our solar system and the potential for finding new celestial bodies that may reshape our understanding of its structure.

According to Fox News, the implications of 2017OF201’s discovery could be significant for future research into the outer solar system.

Meta Limits Teen Access to AI Characters for Safety Reasons

Meta Platforms will temporarily restrict access to AI characters for teenagers as it develops a new, age-appropriate version that includes parental controls and adheres to PG-13 content guidelines.

Meta Platforms announced on Friday that it will suspend access to its AI characters for teenagers across all its applications globally. This decision comes as the company works on a revised version of the feature tailored specifically for younger users.

The initiative reflects Meta’s commitment to refining the interaction between its AI products and teenage users amid increasing scrutiny regarding safety, age-appropriate design, and the implications of generative AI on social media platforms.

“Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta stated.

Once the revamped AI characters are launched, they will incorporate parental controls, allowing families greater oversight of how younger users engage with the technology. This move follows a preview of these controls released in October, where Meta indicated that parents would have the option to disable private chats between their teens and AI characters. This response was prompted by growing concerns over reports of flirtatious interactions between chatbots and minors on its platforms.

Despite the announcement, Meta clarified that these parental controls are not yet operational. Additionally, the company has committed to ensuring that its AI experiences for teenagers adhere to the PG-13 movie rating framework, aiming to restrict exposure to content considered inappropriate for minors.

The changes come at a time when U.S. regulators are intensifying their examination of AI companies and the potential risks associated with chatbots. In August, reports indicated that Meta’s internal AI guidelines had permitted provocative conversations involving minors, further amplifying the pressure on the company to enhance its safety measures.

As the landscape of AI technology continues to evolve, Meta’s proactive approach aims to address the concerns of parents and regulators alike, ensuring a safer online environment for younger users.

The post Meta to block teen access to AI characters appeared first on The American Bazaar, according to The American Bazaar.

Ransomware Attack Exposes Social Security Numbers at Major Gas Station Chain

A recent ransomware attack on a Texas gas station chain has exposed the personal information of over 377,000 individuals, raising concerns about data security in the retail sector.

A ransomware attack on a Texas-based gas station chain has resulted in the exposure of sensitive personal data for more than 377,000 individuals, including Social Security numbers and driver’s license information. This incident underscores the vulnerabilities that exist in industries that handle large volumes of personal data but may lack robust cybersecurity measures.

The breach was reported by Gulshan Management Services, Inc., which is affiliated with Gulshan Enterprises, the operator of approximately 150 Handi Plus and Handi Stop gas stations and convenience stores throughout Texas. According to a disclosure filed with the Maine Attorney General’s Office, the company detected unauthorized access to its IT systems in late September.

Investigators later discovered that the attackers had infiltrated the network for about ten days before the breach was identified. The intrusion began with a phishing attack, highlighting the risks associated with deceptive emails that can lead to significant data breaches.

During this period, the attackers accessed and stole a range of personal information, subsequently deploying ransomware that encrypted files across Gulshan’s systems. The compromised data includes names, contact details, Social Security numbers, and driver’s license numbers, all of which pose serious risks for identity theft and fraud that may manifest long after the breach.

As of now, no ransomware group has publicly claimed responsibility for the attack. While this may seem like a silver lining, it does not alleviate the risks for those affected. In many ransomware incidents, the absence of a claim can indicate that the attackers have not yet released the stolen data publicly or that the victim company has resolved the situation privately.

Gulshan’s filing indicates that the company restored its systems using known-safe backups, suggesting that it opted to rebuild rather than negotiate with the attackers. However, once sensitive data has been extracted from a network, it cannot be retracted, leaving affected individuals at risk regardless of whether the stolen information appears online.

This incident highlights a recurring issue within the retail and service sectors, where businesses often rely on outdated systems and employees who may be vulnerable to phishing attacks. Although gas stations may not seem like obvious targets for cybercriminals, their payment systems, loyalty programs, and human resources databases make them attractive for data breaches.

In light of this breach, individuals whose information may have been compromised should take proactive steps to mitigate potential fallout. If the company offers free credit monitoring or identity protection services, it is advisable to enroll in those programs. Such services can provide early alerts if someone attempts to open accounts or misuse personal information.

If no such services are offered, individuals should consider signing up for a reputable identity theft protection service independently. These services can monitor personal information, such as Social Security numbers and email addresses, and alert users if their data is being sold on the dark web or used to open accounts fraudulently.

Additionally, employing a password manager can help create and store unique passwords for each account, further securing personal information against unauthorized access. Users should also check if their email addresses have been involved in past data breaches and change any reused passwords immediately if they find a match.

Implementing two-factor authentication (2FA) adds another layer of security, particularly for email, banking, and shopping accounts, which are often primary targets for cybercriminals. Furthermore, maintaining strong antivirus software can help detect phishing attempts and suspicious activity before they escalate into significant breaches.

After incidents like this, scammers frequently exploit the situation by sending fake emails or texts impersonating the affected company or credit monitoring services. It is crucial to verify any messages independently and avoid clicking on unexpected links.

Individuals should regularly review their credit reports from major bureaus for unfamiliar accounts or inquiries. They are entitled to free reports, and early detection of issues can facilitate easier resolutions.

If a Social Security number has been compromised, placing a credit freeze can prevent lenders from opening new accounts in the victim’s name, even if they possess personal details. Credit bureaus provide this service at no charge, and it can be temporarily lifted when applying for credit. Alternatively, individuals may opt for a fraud alert, which requires lenders to verify identity before approving credit.

Moreover, when Social Security numbers are stolen, tax fraud often follows, as criminals can file fake tax returns to claim refunds. An IRS Identity Protection PIN (IP PIN) can help prevent this by ensuring that only the rightful owner can file a tax return using their SSN.

It is essential to not only monitor for new fraud but also to secure existing accounts. Setting up alerts for large transactions or changes to contact information can help detect unauthorized activity early. If personal information has been compromised, contacting banks for additional protections is advisable.

This incident serves as a stark reminder that personal data is not only held by banks and healthcare providers but also by retailers and service operators. As cybercriminals exploit vulnerabilities through simple phishing emails, the potential for widespread damage increases significantly. While individuals cannot prevent such breaches, they can take steps to limit the impact of stolen data by securing their accounts and remaining vigilant.

For more information on how to protect yourself from identity theft and data breaches, visit Cyberguy.com.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a face-mounted electronic tattoo, or “e-tattoo,” to monitor mental workload in high-stress professions, utilizing EEG and EOG technology for brain activity analysis.

Scientists have introduced an innovative solution designed to help individuals in high-pressure work environments monitor their cognitive performance. This new device, known as an electronic tattoo or “e-tattoo,” is applied to the forehead and is intended to track brainwaves and mental workload.

A study published in the journal Device outlines the advantages of e-tattoos as a cost-effective and user-friendly method for assessing mental workload. Dr. Nanshu Lu, the senior author of the research from the University of Texas at Austin, emphasized that mental workload is a critical component in human-in-the-loop systems, significantly affecting cognitive performance and decision-making.

In an email to Fox News Digital, Dr. Lu noted that the motivation behind this device stems from the needs of professionals in high-demand, high-stakes jobs, including pilots, air traffic controllers, doctors, and emergency dispatchers. The technology could also benefit emergency room doctors and operators of robots or drones, enhancing both training and performance.

One of the primary objectives of the study was to develop a method for measuring cognitive fatigue in roles that require intense mental focus. The e-tattoo is designed to be temporarily affixed to the forehead and is notably smaller than existing devices on the market.

The device operates by employing electroencephalogram (EEG) and electrooculogram (EOG) technologies to monitor brain waves and eye movements. Traditional EEG and EOG machines tend to be bulky and expensive; however, the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained, “We propose a wireless forehead EEG and EOG sensor designed to be as thin and conformable to the skin as a temporary tattoo sticker, which is referred to as a forehead e-tattoo.” She further noted that understanding human mental workload is essential in the fields of human-machine interaction and ergonomics due to its direct impact on cognitive performance.

The study involved six participants who were tasked with identifying letters displayed on a screen. The letters appeared one at a time in various locations, and participants were instructed to click a mouse if either the letter or its position matched one shown previously. Each participant completed the task multiple times, with varying levels of difficulty.

The researchers observed that as the tasks increased in complexity, the brainwave patterns detected by the e-tattoo indicated a corresponding rise in mental workload. The device comprises a battery pack, reusable chips, and a disposable sensor, making it both practical and efficient for use in cognitive assessments.

Currently, the e-tattoo exists as a laboratory prototype. Dr. Lu mentioned that further development is necessary before it can be commercialized, including the implementation of real-time mental workload decoding and validation in more realistic settings. The prototype is estimated to cost around $200.

This groundbreaking research highlights the potential for e-tattoos to revolutionize how professionals in high-stress jobs monitor their cognitive health and performance, paving the way for advancements in training and operational efficiency.

According to Fox News, the development of this technology could significantly impact various fields by providing a more accessible means of tracking mental workload and cognitive fatigue.

Web Skimming Attacks Target Major Payment Networks and Consumers

Researchers are tracking a persistent web skimming campaign that targets major payment networks, using malicious JavaScript to steal credit card information from unsuspecting online shoppers.

As online shopping becomes increasingly familiar and convenient, a hidden threat lurks beneath the surface. Researchers are monitoring a long-running web skimming campaign that specifically targets businesses connected to major payment networks. This technique enables criminals to secretly insert malicious code into checkout pages, allowing them to capture payment details as customers enter them. Often, these attacks operate unnoticed within the browser, leaving victims unaware until unauthorized charges appear on their statements.

The term “Magecart” refers to various groups that specialize in web skimming attacks. These attacks primarily focus on online stores where customers input payment information during the checkout process. Rather than directly hacking banks or card networks, attackers embed malicious code into a retailer’s checkout page. This code, typically written in JavaScript, is a standard programming language used to enhance website interactivity, such as managing forms and processing payments.

In Magecart attacks, criminals exploit this same JavaScript to covertly capture card numbers, expiration dates, security codes, and billing details as shoppers input their information. The checkout process continues to function normally, providing no immediate warning signs to users. Initially, Magecart referred specifically to attacks on Magento-based online stores, but the term has since expanded to encompass web skimming campaigns across various e-commerce platforms and payment systems.

Researchers indicate that this ongoing campaign targets merchants linked to several major payment networks. Large enterprises that depend on these payment providers face heightened risks due to their complex websites and reliance on third-party integrations. Attackers typically exploit overlooked vulnerabilities, such as outdated plugins, vulnerable third-party scripts, and unpatched content management systems. Once they gain access, they inject JavaScript directly into the checkout flow, allowing the skimmer to monitor form fields associated with card data and personal information. This data is then quietly transmitted to servers controlled by the attackers.

To evade detection, the malicious JavaScript is often heavily obfuscated. Some variants can even remove themselves if they detect an admin session, creating a false impression of a clean inspection. Researchers have also noted that the campaign utilizes bulletproof hosting services, which ignore abuse reports and takedown requests, providing attackers with a stable environment to operate. Because web skimmers function within the browser, they can circumvent many server-side fraud controls employed by merchants and payment providers.

Magecart campaigns simultaneously impact three groups: the online retailers, the customers, and the payment networks. This shared vulnerability complicates detection and response efforts.

While consumers cannot rectify compromised checkout pages, adopting a few smart habits can help mitigate exposure, limit the misuse of stolen data, and facilitate quicker detection of fraud. One effective strategy is to use virtual and single-use cards, which are digital card numbers linked to a real credit or debit account without revealing the actual number. These cards function like standard cards during checkout but provide an additional layer of security. Many people can access these services through their existing banking apps or mobile wallets, such as Apple Pay and Google Pay, which generate temporary card numbers for online transactions.

A single-use card typically works for one purchase or expires shortly after use, while a virtual card can remain active for a specific merchant and be paused or deleted later. If a web skimming attack captures one of these numbers, attackers are generally unable to reuse it elsewhere, significantly limiting financial damage and making it easier to halt fraud.

Transaction alerts can notify users the moment their card is used, even for minor purchases. If web skimming leads to fraudulent activity, these alerts can quickly reveal unauthorized charges, allowing cardholders to freeze their accounts before losses escalate. For instance, a small test charge of $2 could indicate fraud before larger transactions occur.

Using strong, unique passwords for banking and card portals can also reduce the risk of account takeovers. A password manager can assist in generating and securely storing these credentials. Additionally, individuals should check if their email addresses have been compromised in past data breaches. Many password managers include built-in breach scanners that alert users if their information appears in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Robust antivirus software can block connections to malicious domains used to collect skimmed data and alert users about unsafe websites. This protection is essential for safeguarding personal information and digital assets from potential threats, including phishing emails and ransomware scams.

Data removal services can also help minimize the amount of personal information exposed online, making it more challenging for criminals to match stolen card data with complete identity details. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of targeted attacks.

Regularly reviewing financial statements, even for small charges, is another prudent practice, as attackers often test stolen cards with low-value transactions. The Magecart web skimming campaign illustrates how attackers can exploit trusted checkout pages without disrupting the shopping experience. Although consumers cannot fix compromised sites, implementing simple safeguards can help reduce risk and facilitate early detection of fraud. Online payments rely on trust, but this campaign underscores the importance of pairing that trust with caution.

As awareness of web skimming grows, consumers may find themselves reconsidering the safety of online checkout processes. For further information and resources on protecting against these threats, visit CyberGuy.com.

Indian-American CEO Vasudha Badri-Paul Launches AI Accelerator in East Bay

Vasudha Badri-Paul, founder and CEO of Avatara AI, discusses her transition from corporate life to launching an AI accelerator aimed at fostering innovation in California’s East Bay.

Vasudha Badri-Paul, the founder and CEO of Avatara AI, has embarked on an ambitious journey to reshape the landscape of artificial intelligence startups in California’s East Bay. After a lengthy corporate career, she is now focused on building an AI accelerator that aims to nurture the next generation of innovators.

In 2023, Badri-Paul established Avatara AI, a San Francisco-based firm dedicated to helping businesses design and manage AI solutions. She recognized the urgent need for companies to adapt to the rapidly evolving AI landscape. “AI is advancing at such a rapid pace that failing to continuously update your skills can leave you obsolete almost overnight,” she noted.

However, her decision to leave a stable corporate career was also influenced by the Bay Area’s unpredictable hiring environment. “I would say that the job lifespan in the Bay Area is two years, and it’s the same across sectors—corporate, tech, marketing, sales, everywhere,” she explained. With experience at major corporations like Pfizer, Microsoft, GE, Cisco, and Intel, Badri-Paul has witnessed firsthand the constant churn in the job market.

She elaborated on the challenges of this cycle, stating, “There is a constant churn. Reasons range from no funding to restructuring, and people are asked to leave every few years. This recurring cycle in the Bay Area job market that results in redundancies gets tiring after a while. Everyone is watching their back; there is no margin for humanity.”

Frustrated by this instability, Badri-Paul decided to take a bold step: “I took a hard stance and thought of building a company of my own.” As an early innovator in the AI space, she recognized the transformative potential of AI across various sectors. At Avatara, she oversees the development and deployment of AI solutions, focusing on responsible and ethical practices.

In addition to her work at Avatara, Badri-Paul is enthusiastic about the opportunities emerging in the East Bay region. She recently launched the Velocity East Accelerator, which she envisions as a catalyst for the future of AI in the area. “In California, Silicon Valley is where all the tech happens. It is the start-up empire. Despite this boom, some parts of Silicon Valley remain underrepresented, and we have been seeing a shift in the trend,” she stated.

Badri-Paul believes that the East Bay is on the verge of significant growth. “East Bay has kind of taken off,” she remarked. Through Velocity East, she aims to create a hub for innovation and entrepreneurship. As a long-time California resident, she has observed how migration patterns have spurred development in the region. “During Covid, a builder built about 20,000 homes in East Bay. A lot of migration happened during that time,” she noted.

Despite the influx of new residents, Badri-Paul observed a lack of formal support for startups in the area. “While there is a boom in newer residents, there was no formal atmosphere to nurture startups in the area, no Y Combinators—basically no ecosystem to help build ideas,” she explained.

With this vision in mind, she launched Velocity East, an AI accelerator based in San Ramon. Badri-Paul emphasized that the goal of the accelerator is not to replicate existing tech programs but to highlight the potential for groundbreaking AI companies to emerge from the East Bay. “We are talking about areas such as Fremont, Concord, as well as across Alameda and Contra Costa counties,” she said.

Velocity East is powered by The AI Foundry community and aims to accelerate early-stage AI startups through mentorship, resources, and access to capital. Badri-Paul added, “We also build bridges between East Bay innovators and the broader Bay Area ecosystem and create pathways for underrepresented founders to lead in AI.”

Her larger vision is to establish San Ramon and Bishop Ranch as legitimate hubs for AI innovation, shining a spotlight on the East Bay as a vital player in the tech landscape.

As Badri-Paul continues to navigate her entrepreneurial journey, she remains committed to fostering an environment where innovation can thrive, ensuring that the East Bay is recognized as a key contributor to the future of artificial intelligence.

According to The American Bazaar, Badri-Paul’s efforts represent a significant shift in the tech ecosystem, highlighting the importance of nurturing local talent and ideas.

Rising Data Center Growth May Lead to Increased Electricity Costs

A new study reveals that the rapid growth of data centers could significantly increase electricity costs and strain power grids, posing environmental challenges.

A recent study conducted by the Union of Concerned Scientists highlights the potential consequences of the rapid construction of data centers, warning that this surge in demand for electricity could lead to soaring energy costs and environmental harm.

Published on Monday, the report indicates that the pace at which data centers are being built is outstripping the ability of utilities to supply adequate electricity. Mike Jacobs, a senior manager of energy at the organization, emphasized the challenge: “They’re increasing the demand faster than you can increase the supply. How’re you going to do that?”

The report, titled “Data Center Power Play,” models various electricity demand scenarios over the next 25 years, alongside different energy policy approaches to meet these demands. The study aims to estimate the potential costs in terms of electricity, climate impact, and public health, which could amount to trillions of dollars.

Jacobs noted that implementing clean energy policies could mitigate these costs while reducing air pollution and health impacts. He pointed out that the construction of an electric grid capable of meeting the rising demand for power will take significantly longer than building new data centers.

“This is a collision between the people whose philosophy is ‘move fast and break things,’ with the utility industry that has nobody that says move fast and break things,” Jacobs remarked, referring to the rapid expansion of data center facilities. He also mentioned that predicting future demand for data centers is challenging due to limited information from utilities and major tech companies. How this demand is addressed will be crucial for both public health and environmental sustainability.

Jacobs further stated, “This is really a great moment for regulators to do what’s within their authority and sort out and assign the costs to those who cause them, which is an essential principle of utility ratemaking.”

In recent years, tech companies have aggressively expanded their data center operations, driven by the booming demand for artificial intelligence. Major firms such as OpenAI, Google, Meta, and Amazon have made substantial investments in data centers, with projects like Stargate serving as critical infrastructure for AI development.

While the growth of data centers brings job opportunities and digital advancements, it also raises significant concerns regarding their substantial energy and water consumption. Data centers typically rely on water-intensive cooling systems, which can exacerbate existing water scarcity issues.

For instance, a single 100 megawatt (MW) data center can consume over two million liters of water daily, an amount comparable to the daily usage of approximately 6,500 households. This demand is particularly concerning in regions already facing water shortages, such as parts of Georgia, Texas, Arizona, and Oregon, where it places additional stress on aquifers and municipal water supplies.

The findings of this study underscore the urgent need for a balanced approach to energy policy and infrastructure development, ensuring that the growing demands of data centers do not come at the expense of environmental sustainability and public health, according to The Union of Concerned Scientists.

U.S. Supports India-Singapore Submarine Cable Project for Enhanced Connectivity

The U.S. Trade and Development Agency has announced support for a submarine cable project linking India and Singapore, aimed at enhancing connectivity and security in Southeast Asia.

WASHINGTON, DC – On January 20, the U.S. Trade and Development Agency (USTDA) announced its backing for a proposed submarine cable system that will connect India with Singapore and key data hubs across Southeast Asia.

The planned cable route is set to link Chennai, India, with Singapore, while additional landing points are under consideration in Malaysia, Thailand, and Indonesia, according to USTDA.

As part of this initiative, USTDA has signed an agreement with SubConnex Malaysia Sdn. Bhd. to fund a feasibility study for the SCNX3 submarine cable system. This project is expected to serve approximately 1.85 billion people by enhancing digital infrastructure in the region.

The feasibility study aims to attract investment for the cable system and expand the capacity necessary for Artificial Intelligence and cloud-based services. USTDA emphasized that this effort will also help ensure the reliability and security of international networks while minimizing exposure to cyber threats and foreign interference.

The agreement was formalized during the Pacific Telecommunications Council 26 conference held in Honolulu, Hawaii.

SubConnex has appointed Florida-based APTelecom LLC to conduct the feasibility study. The study will encompass various aspects, including route design, engineering, financial modeling, commercialization planning, and regulatory analysis.

The SCNX3 submarine cable is designed to address the increasing connectivity challenges faced by India and Southeast Asia. USTDA noted that the rising demand for digital services, coupled with limited route diversity, has rendered existing networks susceptible to outages and security vulnerabilities.

By introducing new and resilient data pathways, the project is anticipated to enhance digital access and support the growth of Artificial Intelligence and cloud services. USTDA stated that the cable will provide a secure and reliable communications infrastructure for governments, businesses, and citizens throughout South and Southeast Asia.

Furthermore, USTDA highlighted that the feasibility study will promote the use of secure cable technology, safeguarding data flows from potential malicious foreign influences. This concern is increasingly relevant as undersea cables facilitate the majority of global internet and data traffic.

According to IANS, the initiative represents a significant step toward improving digital connectivity in the region.

Dialog Aims to Strengthen Ethical Canada-India AI Collaboration

India and Canada strengthen their partnership in artificial intelligence through the ‘India-Canada AI Dialogue 2026,’ focusing on ethical and inclusive AI development.

TORONTO — The Consulate General of India in Toronto recently hosted the ‘India-Canada AI Dialogue 2026,’ highlighting India’s pivotal role in fostering inclusive, responsible, and impactful artificial intelligence (AI). This event underscored the importance of bilateral cooperation for mutual economic and societal benefits.

Organized in collaboration with the University of Waterloo, the Canada India Tech Council, and Zoho Inc., the dialogue attracted over 600 senior leaders. Participants included C-suite executives, policymakers, and researchers from various sectors, including government, industry, academia, and the innovation ecosystem across Canada. The gathering aimed to enhance collaboration in the field of artificial intelligence.

Dinesh K. Patnaik, the High Commissioner of India to Canada, emphasized the significance of the dialogue, stating, “The India-Canada AI Dialogue 2026 reflects our shared vision for shaping the future of artificial intelligence responsibly. As we build momentum toward the India AI Impact Summit 2026 in New Delhi, this engagement highlights how trusted partners like Canada can collaborate with India to drive innovation that is inclusive, ethical, and globally relevant.”

Canadian Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, addressed the attendees, noting, “AI is no longer an abstract or future-facing conversation — it’s shaping how we work, govern, and relate to one another. What makes the India-Canada AI Dialogue so important is that it puts impact, accountability, and human outcomes at the center of the discussion. India and Canada bring different strengths, but a shared responsibility: to make sure this technology serves people, strengthens societies, and delivers real economic value.”

Doug Ford, the Premier of Ontario, also shared his insights on the dialogue’s significance, stating, “India and Canada share a deep and long-standing partnership, one built on robust trade and investment, people-to-people ties, and research partnerships in emerging technologies such as artificial intelligence.”

The dialogue serves as a platform for both nations to explore innovative solutions in AI while ensuring that ethical considerations remain at the forefront of technological advancements. As the world increasingly relies on AI, the collaboration between India and Canada is poised to set a precedent for responsible AI development globally.

According to IANS, the event marks a significant step in enhancing the Canada-India relationship in the tech sector, particularly in artificial intelligence.

Indian-American Anjeneya Dubey Appointed CTO of Imagine Learning

Anjeneya Dubey, an Indian American cloud and AI leader, has been appointed Chief Technology Officer at Imagine Learning to enhance its AI-driven educational solutions.

Anjeneya Dubey, a prominent Indian American leader in cloud and artificial intelligence, has joined Imagine Learning as Chief Technology Officer (CTO). In this role, he will focus on advancing the company’s Curriculum-Informed AI roadmap, which aims to enhance educator-trusted platforms that connect curriculum, insights, and educational impact.

Imagine Learning, based in Tempe, Arizona, is recognized as a leading provider of digital-first K–12 solutions in the United States. Dubey’s appointment is part of the company’s strategy to ensure that instructional rigor, educator trust, and adaptive innovation remain central to every product experience.

With over two decades of global experience in software engineering, AI innovation, and cloud platforms, Dubey brings a wealth of expertise to his new position. Most recently, he served as the Global Head of Platform Engineering at Honeywell, where he led engineering efforts for digital education platforms used across both K–12 and higher education sectors.

Leslie Curtis, Executive Vice President and Chief Administrative Officer of Imagine Learning, expressed enthusiasm about Dubey’s appointment. “As we build the next era of learning technology, we are investing in leadership that understands both the complexity of enterprise-scale systems and the nuance of classroom impact,” she stated. “Anj’s deep background in SaaS products, data and AI platforms, and developer productivity makes him the ideal leader to power our next wave of curriculum-aligned innovation.”

Dubey’s extensive experience includes building Software as a Service (SaaS) platforms and AI-powered delivery pipelines. He has overseen global cloud infrastructure across major platforms such as AWS, Azure, and Google Cloud Platform (GCP), and has led teams of over 400 engineers across five regions. His contributions to the field are further underscored by multiple patents in hybrid and multi-cloud architectures, as well as the design of platforms serving more than 21 million users in both educational and industrial domains.

In his own words, Dubey expressed excitement about joining Imagine Learning at a crucial time. “This role is a chance to shape how AI can responsibly enhance instructional outcomes, deepen personalization, and support the educators who drive student success every day,” he said. “Our goal is to bring meaningful technology to classrooms — not just automation, but intelligence that understands and elevates learning.”

Dubey’s appointment reflects a broader trend within the education industry, which is increasingly seeking executive talent from cloud-native and AI-forward organizations. Imagine Learning’s strategic move underscores its commitment to maintaining its position as a market leader focused on instructional quality and platform intelligence.

As CTO, Dubey will oversee Imagine Learning’s engineering, DevOps, AI/ML, and cloud teams. His initial initiatives will focus on strengthening the company’s curricula data pipeline, accelerating time-to-insight for educators, and enhancing product reliability for over 18 million students across the nation.

Dubey holds a Bachelor of Technology degree in Electronics and Communication from Madan Mohan Malaviya University of Technology in India, as well as an Executive Certificate in Business Administration and Management from the Mendoza College of Business at the University of Notre Dame.

This appointment marks a significant step for Imagine Learning as it continues to innovate and adapt in the rapidly evolving landscape of educational technology, according to a company release.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

The discovery of a massive interstellar object, 3I/ATLAS, has sparked speculation among scientists, including a Harvard physicist, about its potential technological origins.

A recently discovered interstellar object, known as 3I/ATLAS, is raising eyebrows among astronomers due to its unusual characteristics. Harvard physicist Dr. Avi Loeb suggests that the object’s peculiar features may indicate it is more than just a typical comet.

“Maybe the trajectory was designed,” Dr. Loeb, a science professor at Harvard University, told Fox News Digital. “If it had an objective to sort of be on a reconnaissance mission, to either send mini probes to those planets or monitor them… It seems quite anomalous.”

First detected in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile, 3I/ATLAS marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb pointed out that images of the object reveal an unexpected glow appearing in front of it, rather than the typical tail that comets exhibit. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.”

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is unusually bright given its distance from the sun. However, Dr. Loeb emphasizes that its most striking feature is its trajectory.

“If you imagine objects entering the solar system from random directions, just one in 500 of them would be aligned so well with the orbits of the planets,” he noted. The interstellar object, which originates from the center of the Milky Way galaxy, is expected to pass near Mars, Venus, and Jupiter—an event that Dr. Loeb claims is highly improbable to occur by chance.

“It also comes close to each of them, with a probability of one in 20,000,” he added.

According to NASA, 3I/ATLAS will reach its closest point to the sun—approximately 130 million miles away—on October 30.

“If it turns out to be technological, it would obviously have a big impact on the future of humanity,” Dr. Loeb stated. “We have to decide how to respond to that.”

In January, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics mistakenly identified a Tesla Roadster launched into orbit by SpaceX CEO Elon Musk as an asteroid, highlighting the complexities of identifying objects in space.

A spokesperson for NASA did not immediately respond to requests for comment regarding 3I/ATLAS, leaving the scientific community eager for further insights into this intriguing interstellar visitor.

As the object approaches its closest point to the sun, the implications of its unusual characteristics continue to fuel speculation and debate among astronomers and physicists alike, according to Fox News.

Apple Alerts Users to Security Vulnerability in Millions of iPhones

Apple has issued a warning that a significant security flaw affects approximately 800 million iPhones, urging users to update to iOS 26.2 to mitigate critical vulnerabilities in Safari and WebKit.

Apple’s iPhone, the leading smartphone in the United States and widely used globally, is facing a serious security threat. Recent data indicates that a critical vulnerability could potentially impact around half of all iPhone users, leaving hundreds of millions of devices at risk.

Over the past few weeks, Apple has been alerting users to a significant security flaw that affects an estimated 800 million devices. This vulnerability stems from two critical issues identified in WebKit, the underlying engine that powers Safari and other browsers on iOS. According to Apple, these flaws have been exploited in sophisticated attacks targeting specific individuals, enabling malicious websites to execute harmful code on iPhones and iPads. This could allow attackers to gain control of the device, steal passwords, or access sensitive payment information simply by visiting a compromised site.

In response to this threat, Apple quickly released a software update to address the vulnerabilities. However, reports suggest that many users have yet to install the necessary update. Estimates indicate that approximately 50 percent of eligible users have not upgraded from iOS 18 to the latest version, iOS 26.2. This leaves a staggering number of devices vulnerable worldwide. According to data from StatCounter, the situation may be even more dire, with only about 20 percent of users having completed the update so far. As security details become public, the risk of exploitation increases significantly, as attackers are aware of the vulnerabilities to target.

Apple has specified that certain devices are affected by this vulnerability if they remain unupdated. Users are strongly encouraged to check their devices and ensure they have installed the latest software to protect against potential attacks.

There is no simple setting or browsing habit that can mitigate this issue; the vulnerability is embedded deep within the browser engine. Security experts emphasize that the only effective defense is to install the latest software update. Apple has also discontinued offering a security-only update for users who wish to remain on iOS 18. Unless a device cannot support iOS 26, the fix is only available through the latest versions of iOS 26.2 and iPadOS 26.2.

Updating is generally a straightforward process. If automatic updates are enabled, users may already have the fix installed. For those who need to update manually, the following steps are recommended: ensure the device is connected to Wi-Fi and has sufficient battery life or is plugged in for the update process.

While keeping your iPhone updated is crucial, it should not be the sole line of defense against threats. Utilizing strong antivirus software can provide an additional layer of protection by scanning for malicious links, blocking risky websites, and alerting users to suspicious activity before any damage occurs. This is particularly important given that many attacks exploit compromised websites or hidden browser vulnerabilities. Security software can help identify threats that may slip through and offer greater visibility into device activity.

Think of antivirus software as a backup protection measure. Software updates close known vulnerabilities, while robust antivirus tools help guard against emerging threats.

Apple’s use of the term “extremely sophisticated” in describing the threat underscores the seriousness of the situation. This flaw illustrates how even trusted browsers can become pathways for attacks when updates are delayed. Users who rely on their iPhones for banking, shopping, or work should treat this update as urgent.

As the landscape of cybersecurity continues to evolve, users are left to consider how long they typically wait before installing major iPhone updates. Is that delay worth the risk? Feedback and insights can be shared at Cyberguy.com.

For further information on the best antivirus protection options for Windows, Mac, Android, and iOS devices, visit Cyberguy.com.

According to CyberGuy.com, staying informed and proactive about software updates is essential for maintaining device security.

Andreessen Horowitz Invests $3 Billion in AI Infrastructure Development

Venture capital firm Andreessen Horowitz has made a significant investment of $3 billion in artificial intelligence infrastructure, reflecting its confidence in the sector’s long-term growth potential.

Andreessen Horowitz, one of Silicon Valley’s most influential venture capital firms, is making a bold investment in the future of artificial intelligence (AI), but its approach diverges from the trends seen in the industry.

Commonly referred to as a16z, the firm has committed approximately $3 billion to companies focused on developing the software infrastructure that supports AI. This investment highlights both a strong belief in the long-term growth of AI and a cautious stance regarding the inflated valuations that have characterized the industry in recent years.

In 2024, Andreessen Horowitz launched a dedicated AI infrastructure fund with an initial investment of $1.25 billion. This fund specifically targets startups that create essential tools for developers and enterprises, rather than the more glamorous consumer products dominating headlines. In January, the firm announced an additional investment of around $1.7 billion, bringing its total commitment to approximately $3 billion.

The focus of this fund is on what a16z defines as AI infrastructure. This includes systems that assist technical teams in building, securing, and deploying AI technologies. Key areas of investment encompass coding platforms, foundational model technologies, and networking security tools that are integral to the operation of AI systems.

This strategic move reflects a nuanced understanding of the current landscape, often referred to as the AI bubble. While soaring valuations have drawn parallels to previous tech booms, leaders at Andreessen Horowitz assert that the current frenzy obscures significant advancements occurring beneath the surface.

“Some of the most important companies of tomorrow will be infrastructure companies,” stated Raghuram, a managing partner at the firm and former CEO of VMware, in a recent statement.

The firm’s investment strategy is already yielding positive results. Several AI startups backed by Andreessen Horowitz have achieved lucrative exits or formed valuable partnerships. For instance, Stripe announced its acquisition of Metronome, an AI billing platform supported by the fund, for approximately $1 billion. Additionally, major tech corporations such as Salesforce and Meta have acquired other AI services backed by the firm.

One notable success story is Cursor, an AI coding startup whose valuation skyrocketed to about $29.3 billion last year, a remarkable increase from the $400 million valuation at the time of Andreessen Horowitz’s initial investment.

Despite these successes, concerns linger regarding the overall health of the industry. Critics argue that many private valuations are disconnected from sustainable business fundamentals, with some startups being valued as if they are poised to revolutionize entire sectors overnight.

Ben Horowitz, co-founder and general partner of Andreessen Horowitz, acknowledged that it is premature to draw definitive conclusions about the fund’s performance, which is typically assessed over a decade or more. Nevertheless, he described the fund as “one of the best funds, like, I’ve ever seen.”

The investment strategy is supported by a leadership team that brings a diverse perspective to the table. Martin Casado, a former computational physicist and seasoned coder who oversees the infrastructure unit, noted that while private valuations may appear “crazy,” the demand for AI-focused tools and services remains strong.

Industry analysts suggest that even if certain segments of the market experience a slowdown, a focus on foundational software—rather than merely trendy applications—could position Andreessen Horowitz favorably for the long term.

As the tech sector continues to evolve, the implications of this $3 billion investment will be closely monitored. Whether it will prove successful during a potential tech downturn or reshape how companies implement AI remains one of the most anticipated experiments in the industry.

According to The American Bazaar, Andreessen Horowitz’s strategic focus on AI infrastructure positions it uniquely within a rapidly changing technological landscape.

Novartis Appoints Indian-American Gayathri Raghupathy as Executive Director of AI and Process Excellence

Novartis has appointed Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence, where she will leverage AI to enhance processes and focus on patient care.

Leading innovative medicines company Novartis has announced the appointment of Indian American scientist Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence in U.S. Medical.

In her new role, Raghupathy will collaborate with cross-functional teams to harness artificial intelligence, reimagine critical processes, and scale intelligent solutions that prioritize science and patient care, according to a media release.

“Excited to share about joining Novartis,” Raghupathy expressed on LinkedIn. “I will be working with some amazing teams to harness AI, reimagine processes, and scale intelligent solutions that free us to focus on what matters most: science and patients.”

She also reflected on her career journey, stating, “Grateful for the journey from the lab to medical communications to building AI products in a startup environment, and for the incredible partners who helped shape this path. There’s so much to learn and grow into, and I can’t imagine a better place than Novartis, with its deep commitment to innovation and patients.”

Raghupathy describes herself as a “scientist turned AI strategist who loves turning fuzzy challenges into clear AI opportunities.” She emphasizes her focus on creating AI solutions that address real pain points, connecting various domains such as science, data, process, and operations to design scalable solutions.

“I thrive in fast-paced, 0-to-1 environments where experimentation and teamwork drive progress,” she noted. “Always curious, always learning, and excited about the next wave of human-centered AI in healthcare.”

Prior to her role at Novartis, Raghupathy spent over six years at Kognitic, Inc., a startup where she played a pivotal role in shaping the scientific and business strategy behind AI-enabled intelligence solutions. Most recently, she served as Chief Strategy Officer, having previously held positions such as Vice President of Scientific Strategy and Lead of Scientific & Business Strategy. Her work at Kognitic included driving product innovation, enhancing data quality processes, and collaborating with marketing and medical affairs leaders in the pharmaceutical sector to achieve comprehensive outcomes.

Earlier in her career, Raghupathy worked at BGB Group as a Medical Writer, where she supported scientific content development across various initiatives, including congress planning, promotional strategy, competitive intelligence, and digital education. She also created physician-facing materials and training assets for medical and commercial teams.

Raghupathy’s foundational experience includes co-founding CUNY Biotech and GRO-Biotech, community-led initiatives aimed at connecting life-science researchers with the biopharma ecosystem. Her academic background features a PhD in Molecular, Cell, and Developmental Biology from the Graduate Center at the City University of New York, where her research focused on gene regulation relevant to advancements in T-cell gene therapy.

As she embarks on this new chapter at Novartis, Raghupathy is poised to make significant contributions to the integration of AI in healthcare, ultimately enhancing patient outcomes and driving innovation in the medical field.

The information in this article is based on a media release from Novartis.

Fiber Broadband Provider Investigates Data Breach Impacting One Million Users

Brightspeed is investigating a potential security breach that may have exposed sensitive data of over 1 million customers, as hackers claim to have accessed personal and payment information.

Brightspeed, one of the largest fiber broadband providers in the United States, is currently investigating claims of a significant security breach that allegedly involves sensitive data tied to more than 1 million customers. The allegations emerged when a group identifying itself as the Crimson Collective posted messages on Telegram, warning Brightspeed employees to check their emails. The group asserts it has access to over 1 million residential customer records and has threatened to release sample data if the company does not respond.

As of now, Brightspeed has not confirmed any breach. However, the company stated that it is actively investigating what it refers to as a potential cybersecurity event. According to the Crimson Collective, the stolen data includes a wide array of personally identifiable information. If these claims are accurate, the data could pose serious risks for identity theft and fraud for affected customers.

Brightspeed has emphasized its commitment to addressing the situation. In a statement shared with BleepingComputer, the company indicated that it is rigorously monitoring threats and working to understand the circumstances surrounding the alleged breach. Brightspeed also mentioned that it will keep customers, employees, and authorities informed as more details become available.

Despite the ongoing investigation, there has been no public notice on Brightspeed’s website or social media channels confirming any exposure of customer data. Founded in 2022, Brightspeed is a U.S. telecommunications and internet service provider that emerged after Apollo Global Management acquired local exchange assets from Lumen Technologies. Headquartered in Charlotte, North Carolina, the company serves rural and suburban communities across 20 states and has rapidly expanded its fiber footprint, reaching over 2 million homes and businesses with plans to extend to over 5 million locations.

Given Brightspeed’s focus on underserved areas, many customers rely on the company as their primary internet provider, making any potential breach particularly concerning. The Crimson Collective is not new to targeting high-profile entities. In October, the group breached a GitLab instance associated with Red Hat, stealing hundreds of gigabytes of internal development data. This incident later had repercussions, as Nissan confirmed in December that personal data for approximately 21,000 Japanese customers was exposed through the same breach.

More recently, researchers have noted that the Crimson Collective has targeted cloud environments, including Amazon Web Services, by exploiting exposed credentials and creating unauthorized access accounts to escalate privileges. This track record adds weight to the group’s claims, making them difficult to dismiss.

Even though Brightspeed has yet to confirm a breach, the mere existence of these claims raises significant concerns. If customer data has indeed been accessed, it could be exploited for phishing scams, account takeovers, or payment fraud. Cybercriminals often act quickly following breaches, which means customers should remain vigilant even before an official notice is issued.

A spokesperson for Brightspeed stated, “We take the security of our networks and the protection of our customers’ and employees’ information seriously and are rigorous in securing our networks and monitoring threats. We are currently investigating reports of a cybersecurity event. As we learn more, we will keep our customers, employees, stakeholders, and authorities informed.”

While the investigation unfolds, customers are encouraged to take proactive steps to protect themselves. Most data breaches lead to similar downstream risks, including phishing scams, account takeovers, and identity theft. Establishing good security habits now can help safeguard online accounts.

Scammers often exploit breach headlines to create panic. Customers should be cautious with emails, calls, or texts that mention internet account billing problems or service changes. If a message creates a sense of urgency or pressure, it is advisable to pause before responding. Avoid clicking on links or opening attachments related to account notices or payment issues. Instead, open a new browser window and navigate directly to the company’s official website or app.

Utilizing strong antivirus software can provide an additional layer of protection against malicious downloads. This software can also alert users to phishing emails and ransomware scams, helping to keep personal information and digital assets secure.

Changing Brightspeed account passwords and reviewing passwords for other important accounts is also recommended. Users should create strong, unique passwords that are not reused elsewhere. A trusted password manager can assist in generating and storing complex passwords, making account takeovers more difficult.

Customers should also check if their email addresses have been exposed in past breaches. Some password managers include built-in breach scanners that can identify whether email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Personal data can quietly circulate across data broker sites. Employing a data removal service can help limit the amount of personal information available publicly. While no service can guarantee complete removal of data from the internet, these services actively monitor and systematically erase personal information from numerous websites, reducing the risk of scammers targeting individuals.

Brightspeed allows customers to activate account and billing alerts through the My Brightspeed site or app. Users can select which notifications they wish to receive via email or text. These alerts can help detect unusual activity early and enable prompt responses to potential threats.

Regularly checking bank and credit card statements is also advisable. Customers should look for small or unfamiliar charges, as criminals may test stolen data with low-dollar transactions before attempting larger fraud. If sensitive information may have been compromised, placing a fraud alert or credit freeze can provide additional protection, making it more challenging for criminals to open new accounts in a victim’s name.

Brightspeed’s investigation is ongoing, and the company has pledged to share updates as more information becomes available. The situation underscores the increasing value of customer data and the aggressive tactics employed by extortion groups targeting infrastructure providers. For customers, exercising caution remains the best defense, while transparency and prompt action will be crucial for companies if these claims prove to be valid.

For more information on protecting personal data and staying informed about cybersecurity threats, visit CyberGuy.com.

WhatsApp Web Malware Automatically Distributes Banking Trojan to Users

A new malware campaign is exploiting WhatsApp Web to spread Astaroth banking trojan through trusted conversations, posing significant risks to users.

A recent malware campaign is transforming WhatsApp Web into a tool for cybercriminals. Security researchers have identified a banking Trojan linked to Astaroth that spreads automatically through chat messages, complicating efforts to halt the attack once it begins. This campaign, dubbed Boto Cor-de-Rosa, highlights the evolving tactics of cybercriminals who exploit trusted communication platforms.

The attack primarily targets Windows users, utilizing WhatsApp Web as both the delivery mechanism and the means of further spreading the infection. The process begins innocuously with a message from a contact containing what appears to be a harmless ZIP file. The file name is designed to look random and benign, which reduces the likelihood of suspicion.

Upon opening the ZIP file, users unwittingly execute a Visual Basic script disguised as a standard document. If the script is run, it quietly downloads two additional pieces of malware, including the Astaroth banking trojan, which is written in Delphi. Additionally, a Python-based module is installed to control WhatsApp Web, allowing the malware to operate in the background without any obvious warning signs. This self-sustaining infection mechanism makes the campaign particularly dangerous.

What sets this campaign apart is its method of propagation. The Python module scans the victim’s WhatsApp contacts and automatically sends the malicious ZIP file to every conversation. Researchers from Acronis have noted that the malware even tailors its messages based on the time of day, often including friendly greetings to make the communication feel familiar. Messages such as “Here is the requested file. If you have any questions, I’m available!” appear to come from trusted contacts, leading many recipients to open them without hesitation.

The malware is also designed to monitor its own effectiveness in real time. The propagation tool tracks the number of successfully delivered messages, failed attempts, and the overall sending speed. After every 50 messages, it generates progress updates, allowing attackers to measure their success quickly and adapt their strategies as needed.

To evade detection by antivirus software, the initial script is heavily obfuscated. Once executed, it launches PowerShell commands that download additional malware from compromised websites, including a known domain, coffe-estilo.com. The malware installs itself in a folder that mimics a Microsoft Edge cache directory, containing executable files and libraries that comprise the full Astaroth banking payload. This allows the malware to steal credentials, monitor user activity, and potentially access financial accounts.

WhatsApp Web’s popularity stems from its ability to mirror phone conversations on a computer, making it convenient for users to send messages and share files. However, this convenience also introduces significant risks. When users connect their phones to WhatsApp Web by scanning a QR code at web.whatsapp.com, the browser session becomes a trusted extension of their account. This means that if malware gains access to a computer with an active WhatsApp Web session, it can act on behalf of the user, reading messages, accessing contact lists, and sending files that appear legitimate.

This exploitation of WhatsApp Web as a delivery system for malware is particularly concerning. Rather than infiltrating WhatsApp itself, attackers take advantage of an open browser session to spread malicious files automatically. Many users remain unaware of the potential dangers, as WhatsApp Web often feels harmless and is frequently left signed in on shared or public computers. In these scenarios, malware does not require sophisticated methods; it simply needs access to a trusted session.

To mitigate the risks associated with this type of malware, users should adopt several smart habits. First and foremost, never open ZIP files sent through chat unless you have confirmed the sender’s identity. Be cautious of file names that appear random or unfamiliar, and treat messages that create a sense of urgency or familiarity as potential warning signs. If a file arrives unexpectedly, take a moment to verify its authenticity before clicking.

Additionally, users should regularly check active WhatsApp Web sessions and log out of any that are unrecognized. Avoid leaving WhatsApp Web signed in on shared or public computers, and enable two-factor authentication (2FA) within WhatsApp settings. Limiting web access can significantly reduce the potential spread of malware.

Keeping devices updated is also crucial. Installing Windows updates promptly and ensuring that web browsers are fully updated can close many vulnerabilities that attackers exploit. Strong antivirus software is essential for monitoring script abuse and PowerShell activity in real time, providing an additional layer of protection against malware.

Banking malware is often associated with identity theft and financial fraud. To minimize the fallout from such attacks, consider reducing your digital footprint. Data removal services can assist in removing personal information from data broker sites, making it harder for criminals to exploit your details if malware infiltrates your device. While no service can guarantee complete data removal from the internet, these services actively monitor and erase personal information from numerous websites, enhancing your privacy.

Even with robust security measures in place, financial monitoring adds another layer of protection. Identity theft protection services can track suspicious activity related to your credit and personal data, alerting you if your information is being sold on the dark web or used to open unauthorized accounts. Setting up alerts for bank and credit card transactions can help you respond quickly to any irregularities.

Most malware infections occur when users act too quickly. If a message feels suspicious, trust your instincts. Familiar names and friendly language can lower your guard, but they should never replace caution. Taking a moment to verify the authenticity of a message or file can prevent significant damage.

This WhatsApp Web malware campaign serves as a stark reminder that cyberattacks are increasingly sophisticated, often blending seamlessly into everyday conversations. The ease with which this threat can spread from one device to many is alarming. A single click can transform a trusted chat into a vehicle for banking malware and identity theft. Fortunately, simple changes in behavior, such as being vigilant about attachments, securing WhatsApp Web access, keeping devices updated, and exercising caution before clicking, can significantly reduce the risk of falling victim to such attacks.

As messaging platforms continue to play a larger role in our daily lives, maintaining awareness and adopting simple security habits is essential. Do you believe messaging apps are doing enough to protect users from malware that spreads through trusted conversations? Share your thoughts with us.

According to Source Name.

India’s Vision for AI Discussed at Washington Embassy Meeting

India’s Deputy Chief of Mission in Washington outlined the nation’s vision for artificial intelligence at a recent event, emphasizing the upcoming AI Impact Summit’s focus on practical outcomes for people, the planet, and progress.

WASHINGTON, DC — India is set to host the AI Impact Summit in New Delhi, which will revolve around three core themes: people, planet, and progress. The summit aims to transition global discussions on artificial intelligence from theoretical principles to actionable outcomes, according to Namgya Khampa, India’s Deputy Chief of Mission in Washington.

Khampa made these remarks during the “US-India Strategic Cooperation on AI” discussion, organized by the Observer Research Foundation America, the Special Competitive Studies Project, and the Embassy of India. The event, held at the US Capitol, convened policymakers and experts to outline shared priorities ahead of the summit.

She emphasized that artificial intelligence has evolved from a niche technology into a fundamental component that shapes economic competitiveness, geopolitical power, and societal outcomes.

India’s approach to AI is deeply rooted in its experience with digital public infrastructure. Khampa highlighted how inclusive, interoperable, and cost-effective technology has the potential to transform governance on a large scale. She pointed to platforms like Aadhaar and the Unified Payments Interface, which have significantly expanded access to public services, finance, and identity for over 1.4 billion Indians.

Khampa described AI as a “force multiplier” that enhances existing digital public infrastructure, making systems smarter, more responsive, productive, and accessible. This perspective aims to shift AI from being an abstract concept to a practical tool that drives transformation in everyday life.

The AI Impact Summit is notable for being the first major global AI summit hosted by a country from the Global South. Khampa stated that the summit seeks to address imbalances in global AI governance by promoting broader participation and ownership, rather than compromising on standards.

She elaborated on the summit’s framework, reiterating the themes of people, planet, and progress, which reflect India’s vision of “AI for all.” According to Khampa, AI should empower individuals rather than marginalize them, be resource-efficient, align with sustainability goals, and foster equitable economic growth, particularly in sectors like healthcare, education, agriculture, and public service delivery.

In light of increasing geopolitical tensions and the weaponization of technology supply chains, Khampa noted that technological resilience has become a central aspect of national strategy. She highlighted the India-US trust initiative as a means to transition cooperation from conceptual discussions to concrete projects across research, standards, skill development, and next-generation technologies.

India’s linguistic diversity and its population-scale digital platforms provide a unique environment for developing inclusive, multilingual AI systems. Meanwhile, the United States contributes cutting-edge research, capital, and advanced use cases that can be tested in India and scaled globally.

As the AI Impact Summit approaches, it is clear that India is positioning itself as a leader in the global dialogue on artificial intelligence, advocating for a vision that prioritizes inclusivity, sustainability, and practical benefits for all.

According to IANS, the summit is expected to set a precedent for future discussions on AI governance and cooperation.

OpenAI Introduces Advertising Features to ChatGPT Platform

OpenAI is set to introduce advertising in ChatGPT for U.S. users on its free and Go-tier plans, marking a significant shift in its revenue strategy.

OpenAI is preparing to test advertisements within ChatGPT, targeting users of its free version and the newly launched Go-tier plan in the United States. This initiative aims to alleviate the financial pressures associated with developing and maintaining advanced artificial intelligence systems.

The company announced on Friday that the ads will begin appearing in the coming weeks, clearly distinguished from the AI-generated responses that users receive. Users subscribed to OpenAI’s higher-tier plans—Plus, Pro, Business, and Enterprise—will not encounter these advertisements.

OpenAI emphasized that the introduction of ads will not affect the quality or integrity of ChatGPT’s responses. Furthermore, user conversations will remain confidential and will not be shared with advertisers.

This move represents a significant shift for OpenAI, which has primarily relied on subscription revenue up to this point. It also highlights the increasing financial challenges the company faces as it invests billions in data centers and prepares for a highly anticipated initial public offering.

Despite currently operating at a loss, OpenAI has projected that it will spend over $1 trillion on AI infrastructure by 2030. However, the company has yet to disclose a detailed plan for funding this extensive expansion.

Industry analysts suggest that advertising could become a vital new revenue stream for ChatGPT, which currently boasts approximately 800 million weekly active users. Nevertheless, they caution that this strategy carries inherent risks, including the potential to alienate users and diminish trust if the ads are perceived as intrusive or poorly integrated.

“If ads come off as clumsy or opportunistic, people won’t hesitate to jump ship,” warned Jeremy Goldman, an analyst at Emarketer. He noted that alternatives like Google’s Gemini or Anthropic’s Claude are readily available to users seeking ad-free experiences.

Goldman also indicated that OpenAI’s decision to incorporate ads could have broader implications for the industry, compelling competitors to clarify their own monetization strategies, particularly those that promote themselves as “ad-free by design.”

OpenAI has assured users that advertisements will not be displayed to individuals under the age of 18 and that sensitive topics, such as health and politics, will be excluded from advertising content.

According to the company, ads will be tested at the bottom of ChatGPT responses when relevant sponsored products or services align with the ongoing conversation. This approach aims to ensure that advertisements are contextually appropriate and minimally disruptive.

Advertisers are increasingly optimistic about AI’s potential to enhance results across search and social media platforms, believing that more sophisticated recommendation systems will lead to more effective and targeted advertising.

Additionally, OpenAI confirmed that its ChatGPT Go plan, initially launched in India, will soon be available in the U.S. at a monthly subscription price of $8.

This new advertising initiative marks a pivotal moment for OpenAI as it seeks to balance user experience with the need for sustainable revenue growth, navigating the challenges of an evolving digital landscape.

For more details, refer to American Bazaar.

Humans in the Loop: Tribal Wisdom and AI Bias Challenges

Independent film ‘Humans in the Loop’ explores the intersection of tribal wisdom and artificial intelligence, highlighting the importance of human input in technology.

Independent films often struggle to find their footing in the vast landscape of mainstream cinema. However, Humans in the Loop (2024), now streaming on Netflix, has carved out a niche for itself, thanks in part to the involvement of executive producer Kiran Rao. The film draws inspiration from a 2022 article by journalist Karishma Mehrotra in FiftyTwo, titled “Human Touch.” It follows the story of Nehma, an Adivasi woman from the Oraon tribe in Jharkhand, who returns to her ancestral village after a broken relationship and faces the challenge of supporting her children.

To make ends meet, Nehma takes a job as a data labeller at an AI data center, where she assigns labels to images and videos to help train AI systems. As she immerses herself in this work, she begins to recognize that the categories she is asked to define and the systems she is contributing to may harbor biases that are disconnected from her cultural understanding of nature, community, and labor.

One of the film’s emotional cores lies in the relationship between Nehma and her daughter, Dhaanu. While Dhaanu is drawn toward the urban world, Nehma feels a strong pull back to her land and traditions. Yet, she is also compelled to embrace this new mode of work. The film captures this dynamic beautifully, avoiding forced sentimentality.

Watching Humans in the Loop evokes a sense of quiet tension, navigating the complexities of place and displacement, tradition and technology, caregiving and coded labor. Viewers find themselves rooting for Nehma not only as a mother striving to support her children but also as a subtle force challenging conventional notions of progress.

The film employs contrasting spaces to enhance its narrative: the lush, vibrant village juxtaposed with the sterile, screen-filled environment of the data lab. These visual contrasts underscore the film’s exploration of loops—nature versus technology, labor versus identity, home versus exile. The sound design is particularly evocative, intertwining the natural sounds of the forest with the digital hum of the lab, creating a soulful auditory backdrop.

In addressing the theme of AI’s potential to enhance tribal lives, the film does not take an anti-AI stance. Instead, it posits that when AI systems integrate the labor, perspectives, and knowledge of tribal communities, they can become tools of recognition and empowerment. Nehma’s insistence on shaping the labels and incorporating her lived ecological knowledge into the system illustrates that technology can serve as a site of agency rather than mere extraction.

This hopeful loop suggests that humans can train machines, and in turn, the outputs of these machines can reflect that training. Nehma’s journey emphasizes that individuals can learn not only to survive but also to assert their knowledge. When approached ethically and collaboratively, AI can become part of a cycle of continuity, serving not as a break from tradition but as a tool to sustain and evolve it.

Titled after the human-in-the-loop (HITL) approach, which actively integrates human input and expertise into machine learning and AI systems, Humans in the Loop stands as a quietly significant film. Director Aranya Sahay has crafted a narrative that speaks to the age of AI while honoring the human experience—the laborer, the mother, the land. As discussions surrounding AI and equity continue to grow, this film is poised to resonate even more deeply over time, according to India Currents.

GTA 6 Online Mode Details Leaked in Court Documents

New details about GTA 6’s online mode have emerged from court documents, suggesting the game may feature 32-player lobbies ahead of its anticipated release on November 19, 2026.

New insights into the online mode of Grand Theft Auto VI (GTA 6) have surfaced from court documents related to a legal dispute involving Rockstar Games and its former employees. This information, which has not been officially confirmed by Rockstar, offers a glimpse into the multiplayer component of the highly anticipated game, set to be released on November 19, 2026, for PlayStation 5 and Xbox Series X/S.

Rockstar has maintained a tight lid on the details surrounding GTA 6’s multiplayer features. However, recent revelations from a tribunal in the UK indicate that the game may support up to 32 players in a single session, mirroring the current setup in GTA Online.

The details emerged during a legal hearing concerning the termination of over 30 developers at Rockstar, which is tied to allegations of leaking confidential information on a private Discord channel associated with the Independent Workers’ Union of Great Britain (IWGB). During the proceedings, Rockstar disclosed that certain internal messages discussed game features deemed “top secret.” Among these was a reference to a “large session” involving 32 players, which many have interpreted as a significant hint regarding the online mode.

According to the court documents, the leaked information stemmed from internal Discord messages where a former employee noted that Rockstar faced challenges in organizing playtests due to the need for 32-player sessions. Another developer questioned the difficulty of arranging such sessions, suggesting that multiple studios with quality assurance testers should be able to manage it.

While Rockstar has yet to officially confirm any multiplayer features for GTA 6, the leak aligns with the existing 32-player limit in GTA Online, providing one of the clearest indications of the online ambitions for the upcoming title.

Fans of the franchise have high expectations for GTA 6 Online, particularly given the success of GTA Online, which set a high standard for open-world multiplayer experiences. Many anticipate that the new installment will introduce innovative mechanics, expansive maps, fresh missions, and enhanced social features. Currently, the only confirmed detail is the proposed 32-player limit for at least one type of online session.

In the midst of these developments, Rockstar has defended its decision to terminate the employees, asserting that the dismissals were due to the leaking of confidential information rather than any union-related activities. The company claims that sharing sensitive game details violated internal policies. Conversely, the IWGB and the dismissed developers contend that the firings were unjust and linked to union activism.

A recent ruling by a UK judge determined that Rockstar is not obligated to provide interim back pay to the terminated staff, which supports the studio’s position regarding confidentiality breaches.

The significance of the 32-player detail lies in its origin; it comes from official court documents rather than speculative leaks. While this number may seem modest compared to earlier rumors of larger player limits, it suggests that Rockstar may be adopting a familiar multiplayer structure as a foundation for GTA 6.

It remains uncertain whether the online mode will launch with additional player limits or game modes that could accommodate more than 32 players. Rockstar has not publicly commented on these possibilities. For now, this insight derived from court proceedings offers fans their first credible look at the multiplayer potential of GTA 6 as the release date approaches.

As anticipation builds, Rockstar has officially confirmed that GTA 6 will be available on November 19, 2026, for PS5 and Xbox Series X/S, with expectations for additional platform releases to follow. Fans are eagerly awaiting what is poised to be one of the most significant gaming releases in recent years, according to The Sunday Guardian.

Taiwan Plans $250 Billion Investment in U.S. Semiconductor Manufacturing

Taiwan has committed to investing $250 billion in U.S. semiconductor manufacturing, aiming to enhance domestic production capabilities and reduce reliance on foreign supply chains.

The U.S. Department of Commerce announced on Thursday that Taiwan will invest $250 billion to bolster semiconductor manufacturing in the United States. This significant deal, signed during the Trump administration, aims to enhance domestic production capabilities in a sector critical to both the economy and national security.

Under the agreement, Taiwanese semiconductor and technology companies will make direct investments in the U.S. semiconductor industry. These investments are expected to cover a range of areas, including semiconductors, energy, and artificial intelligence (AI) production and innovation. Currently, Taiwan is responsible for producing more than half of the world’s semiconductors, highlighting its pivotal role in the global supply chain.

In addition to the direct investments, Taiwan will provide $250 billion in credit guarantees to facilitate further investments from its semiconductor and tech enterprises. However, the timeline for these investments remains unspecified.

In exchange for Taiwan’s substantial investment, the United States plans to invest in various sectors within Taiwan, including semiconductor manufacturing, defense, AI, telecommunications, and biotechnology. The specific amount of this reciprocal investment has not been disclosed.

This announcement follows a proclamation from the Trump administration that reiterated the U.S. goal of increasing domestic semiconductor manufacturing. The proclamation emphasized that reliance on foreign supply chains poses significant economic and national security risks. “Given the foundational role that semiconductors play in the modern economy and national defense, a disruption of import-reliant supply chains could strain the United States’ industrial and military capabilities,” it stated.

Additionally, the proclamation introduced a 25% tariff on certain advanced AI chips and indicated that further tariffs on semiconductors would be considered once trade negotiations with other countries, including the deal with Taiwan, are finalized.

In 2025, semiconductor manufacturing has become a focal point of Trump’s economic agenda, with efforts aimed at reducing U.S. dependence on foreign chip production. The administration has proposed aggressive trade measures, including a potential 100% tariff on imported semiconductors, although companies that commit to establishing manufacturing facilities in the U.S. may be eligible for exemptions.

Last year, Taiwan Semiconductor Manufacturing Company (TSMC) announced plans to invest $100 billion to enhance chip manufacturing capabilities in the United States, further underscoring the importance of this sector.

Semiconductors are essential components of modern technology, powering a wide array of devices, from smartphones and automobiles to telecommunications equipment and military systems. The U.S. share of global wafer fabrication has significantly declined, dropping from 37% in 1990 to less than 10% in 2024. This shift has largely been attributed to foreign industrial policies that favor production in East Asia.

As the U.S. seeks to reclaim its position in the semiconductor industry, the partnership with Taiwan represents a critical step towards enhancing domestic manufacturing capabilities and securing supply chains.

This initiative reflects a broader strategy to strengthen the U.S. economy and safeguard national interests in an increasingly competitive global landscape, according to The American Bazaar.

RCB Introduces AI Solution for Crowd Management at Chinnaswamy Stadium

RCB is investing Rs 4.5 crore in an AI-enabled project to enhance crowd management and safety at M. Chinnaswamy Stadium during IPL 2026.

Royal Challengers Bangalore (RCB) is taking a significant step towards improving the matchday experience at M. Chinnaswamy Stadium by investing Rs 4.5 crore in an innovative project aimed at crowd management and safety.

In partnership with Staqu, a technology firm specializing in artificial intelligence, RCB plans to implement advanced facial recognition and intelligent monitoring systems. This initiative is designed to enhance public safety and ensure a seamless experience for fans attending matches.

The deployment of these technologies is expected to address crowd-related issues that have been a concern in previous seasons. By utilizing AI, RCB aims to streamline entry processes and monitor crowd behavior effectively, thereby reducing the likelihood of incidents and improving overall security.

As the Indian Premier League (IPL) continues to grow in popularity, the need for enhanced safety measures has become increasingly important. RCB’s proactive approach reflects a commitment to not only provide an enjoyable atmosphere for fans but also to prioritize their safety during events.

With the introduction of this AI-enabled solution, RCB hopes to set a new standard for crowd management in sports venues across India. The project signifies a forward-thinking approach to leveraging technology in enhancing the spectator experience.

According to NDTV, the collaboration with Staqu marks a significant investment in the future of sports management, showcasing RCB’s dedication to innovation and fan engagement.

Can Autonomous Trucks Enhance Highway Safety and Reduce Accidents?

Kodiak AI’s autonomous trucks have successfully driven over 3 million miles, demonstrating the potential for self-driving technology to enhance highway safety in real-world conditions.

Kodiak AI, a prominent player in the field of AI-powered autonomous driving technology, has been quietly proving the viability of self-driving trucks on actual highways. The company’s flagship system, known as the Kodiak Driver, integrates advanced software with modular, vehicle-agnostic hardware, creating a cohesive platform designed for the complexities of real-world trucking.

As Kodiak AI explains, the Kodiak Driver is not just a theoretical solution; it is built to address the challenges of highways, varying weather conditions, driver fatigue, and the demands of long-haul transportation. This practical approach is essential, as trucking is far from a controlled laboratory environment.

In a recent episode of CyberGuy’s “Beyond Connected” podcast, Kurt spoke with Daniel Goff, vice president of external affairs at Kodiak AI, about the evolving perceptions surrounding autonomous trucks. Goff reflected on the initial skepticism the company faced when it was founded in 2018. “When I first started at the company, I said I worked for a company that was working to build trucks that drive themselves, and people kind of looked at me like I was crazy,” he recalled. However, he noted a significant shift in public sentiment as autonomous vehicles have begun to demonstrate their capabilities beyond mere hype.

One of Kodiak AI’s key arguments is that machines can mitigate many risks associated with human driving. Goff emphasized, “This technology doesn’t get distracted. It doesn’t check its phone. It doesn’t have a bad day to take it out on the road. It doesn’t speed.” In the trucking industry, where safety is paramount, these “boring” characteristics of autonomous vehicles can be advantageous.

Kodiak AI has been actively operating freight routes for several years, rather than solely conducting tests in controlled environments. The company has a command center in Lancaster, Texas, which has facilitated deliveries to cities such as Houston, Oklahoma City, and Atlanta since 2019. During these operations, a safety driver is present to take control if necessary, allowing Kodiak to refine its technology in real-world conditions.

Long-haul trucking is crucial to the U.S. economy, yet it is also one of the most demanding and hazardous professions. Drivers often spend extended periods away from home, working long hours while managing heavy vehicles under various conditions. Goff pointed out that the job’s challenges are compounded by federal regulations that limit driving hours to reduce fatigue. “Driving a truck is one of the most difficult and dangerous jobs that people do in the United States every day,” he said. With a growing number of drivers retiring and fewer individuals entering the profession, the industry is experiencing a significant driver shortage.

Kodiak AI believes that autonomous technology is best suited for the most challenging and repetitive tasks within trucking. Goff explained, “The goal for this technology is really best suited for those really tough jobs—the long lonely highway miles, the trucking in remote locations where people either don’t want to live or can’t easily live.” He also noted that many trucks are idle for a significant portion of the day, with the average truck being driven only about seven hours daily. Autonomous technology could help optimize this by enabling trucks to operate around the clock, only stopping for refueling and safety inspections.

With over 3 million miles driven, Kodiak AI has established a strong safety record, with a safety driver present for most of those miles. Goff highlighted the scale of their operations by comparing it to the average American’s lifetime driving distance of approximately 800,000 miles. “We’re at almost four average lifetimes with our system today,” he stated. The company also utilizes computer simulations and various assessments to evaluate the safety of its system.

In addition to long-haul operations, Kodiak AI collaborates with Atlas Energy Solutions for oil logistics in the Permian Basin. As of the third quarter of 2025, the company has delivered ten driverless trucks to Atlas, which autonomously transport sand around the clock without a human operator in the cab. Goff described this partnership as an ideal environment for testing and refining their long-haul operations.

Kodiak AI has sought third-party validation of its safety claims, including a study with Nauto, a leader in AI-enabled dashcams. The results indicated that Kodiak’s system achieved the highest safety score recorded by Nauto.

Policy and regulation also play a critical role in the adoption of autonomous trucking. Goff noted that 25 states have enacted laws allowing for the deployment of autonomous vehicles. He believes that the inherent dangers of driving make a compelling case for the technology. “People who think about transportation every day understand how dangerous driving a car is, driving a truck is, and just being on the road see the potential for this technology,” he said.

Despite the advancements, concerns about safety remain prevalent among advocates and everyday drivers. Critics question whether autonomous systems can respond adequately in emergencies or handle unpredictable human behavior on the road. Goff acknowledged these concerns, stating, “In this industry in particular, we really understand how important it is to be safe.” He emphasized that trust in autonomous systems must be earned through consistent real-world performance and transparent testing.

For everyday drivers, the prospect of sharing the road with autonomous vehicles can be unsettling, especially given the focus on potential failures in media coverage. However, Kodiak AI argues that the removal of human factors such as fatigue and distraction could lead to safer highways. If the technology continues to perform as claimed, it could result in fewer tired drivers on overnight routes, more reliable freight movement, and ultimately safer roads for all users.

As Kodiak AI continues to move freight and gather safety data on public roads, skepticism remains a vital aspect of the conversation surrounding autonomous trucking. The future of this technology will depend on its ability to demonstrate long-term safety benefits and earn the trust of the public, rather than relying on promises alone. The pressing question is no longer whether self-driving trucks can operate effectively, but whether they can consistently prove to enhance safety for everyone on the road.

For further insights, refer to CyberGuy.

Google Launches Program to Support Indian AI Startups Going Global

Google has launched a new Market Access Program aimed at helping Indian AI startups scale globally, coinciding with the projected growth of India’s AI market to $126 billion by 2030.

With India’s artificial intelligence (AI) market projected to reach $126 billion by 2030, Google has introduced a new Market Access Program designed to assist Indian AI startups in scaling their operations and expanding into global markets.

Announced during the Google AI Startups Conclave in New Delhi, the program aims to support startups from their initial seed stage to full-scale operations. Preeti Lobana, Vice President and Country Manager for Google India, emphasized the importance of this initiative, stating, “If you solve for India, you build for the world. Our focus now is accelerating how quickly Indian startups can scale, reach global markets, and deliver outcomes.”

Lobana noted that India’s AI startup ecosystem is entering a transformative phase, moving from prototypes to market-ready products and transitioning from early traction to sustainable business models. Google’s comprehensive support for startups encompasses capability building, real-world deployment, and scaling, addressing challenges at every critical stage of development.

The Market Access Program is specifically tailored for AI-first startups that are prepared to scale responsibly. It focuses on three key outcomes: enhancing enterprise readiness through global selling expertise, providing access to Google’s extensive enterprise network, and facilitating global immersion in key international markets.

To bolster the capabilities of these startups, Google also announced the upcoming Global AI Hub in Visakhapatnam. This facility, which will be powered by green energy and feature 1-gigawatt computing resources, is designed to equip startups with the high-performance computing necessary to refine their AI models on a global scale.

In addition to the Market Access Program, Google unveiled new updates to its Gemma model family, specifically targeting areas of rapid adoption in India, such as population-scale healthcare AI and action-oriented, on-device agents. The latest iteration, MedGemma 1.5, enhances Google’s health-focused AI initiatives by enabling developers to create applications that support complex medical imaging workflows.

The release of MedGemma 1.5 follows a collaboration between Google and the All India Institute of Medical Sciences (AIIMS), which is utilizing the model to develop India’s Health Foundation Models. This partnership contributes to the country’s Digital Public Infrastructure and enhances health outcomes across the ecosystem.

To support the growing demand for agent-based systems, Google introduced FunctionGemma, a specialized version of the Gemma 3 model. FunctionGemma is designed for function calling, allowing startups to translate natural language commands into executable actions. This capability enables the development of on-device, low-latency applications with automated workflows that prioritize user privacy and can function effectively on low-end devices without a constant internet connection.

Together, these advancements expand the toolkit available to Indian founders, facilitating the transition from experimentation to deployment across healthcare, enterprise, and consumer applications at scale. Lobana highlighted that these models are supported by popular tools throughout the development workflow, including Hugging Face Transformers, Unsloth, Keras, and NVIDIA NeMo.

Alongside the Conclave, Inc42 released the “Bharat AI Startups Report 2026,” which was supported by Google. The report reveals a significant shift in the AI ecosystem, with 47% of enterprises already moving from pilot projects to full production. It also notes a decrease in innovation costs, as historically high computing expenses have hindered Indian startups. With public resources lowering entry barriers, funding is increasingly directed toward product innovation rather than infrastructure costs.

India’s unique challenges, including its 22 languages, inconsistent connectivity, and price sensitivity, have often been viewed as obstacles. However, the report reframes these challenges as assets, suggesting that if an AI solution can effectively serve rural users in India, it is robust enough for global markets. The concept of “Bharat-tested” technology is emerging as a new benchmark for resilience.

The competitive landscape is shifting towards trust-by-design, with startups that prioritize safety, privacy, and security from the outset gaining a significant advantage in securing long-term enterprise contracts.

Ultimately, the success of AI initiatives will be measured by their outcomes. Examples include Cloudphysician, which has reduced ICU mortality rates by 40%, and Rocket Learning, which personalizes education for millions of students. Lobana concluded, “By stitching together skilling, capital, infrastructure, and market access, we are clearing the path for founders. As we look to the AI Impact Summit in February, the signal is clear: The future of AI isn’t just being used in India; it is being built here.”

According to Inc42, the launch of the Market Access Program marks a pivotal moment for Indian AI startups, positioning them to thrive in a rapidly evolving global landscape.

NASA’s Artemis II Mission Marks First Crewed Deep Space Flight in Over 50 Years

NASA is set to launch Artemis II on February 6, marking the return of humans to deep space for the first time in over 50 years with a historic 10-day mission around the Moon.

NASA has announced that it will return humans to deep space next month, targeting a launch date of February 6 for Artemis II. This 10-day crewed mission will carry astronauts around the Moon for the first time in more than half a century.

“We are going — again,” NASA stated in a post on X, confirming that the mission is scheduled to depart no earlier than February 6. The first available launch window will run from January 31 to February 14, with specific launch opportunities on February 6, 7, 8, 10, and 11.

If the launch is delayed, additional windows will open from February 28 to March 13, and from March 27 to April 10. During the February window, opportunities will be available on March 6, 7, 8, 9, and 11, while the April window will offer chances on April 1, 3, 4, 5, and 6.

The mission is set to lift off from Launch Complex 39B at NASA’s Kennedy Space Center in Florida, aboard the Space Launch System (SLS), the most powerful rocket the agency has ever constructed. Preparations are already underway to move the rocket to the launch pad, with the rollout expected to begin no earlier than January 17. This process involves a four-mile journey from the Vehicle Assembly Building to Launch Pad 39B aboard the crawler-transporter 2, which is anticipated to take up to 12 hours.

“We are moving closer to Artemis II, with rollout just around the corner,” said Lori Glaze, acting associate administrator for NASA’s Exploration Systems Development Mission Directorate. “We have important steps remaining on our path to launch, and crew safety will remain our top priority at every turn as we near humanity’s return to the Moon.”

The 322-foot rocket will carry four astronauts beyond Earth’s orbit to test the Orion spacecraft in deep space for the first time with a crew on board. This mission represents a significant milestone following the Apollo era, which last sent humans to the Moon in 1972.

The Artemis II crew includes NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with Canadian Space Agency astronaut Jeremy Hansen. This mission will be notable for being the first lunar mission to include a Canadian astronaut and the first to carry a woman beyond low Earth orbit.

After launch, the astronauts are expected to spend approximately two days near Earth to check Orion’s systems before igniting the spacecraft’s European-built service module to begin their journey toward the Moon.

This maneuver will send the spacecraft on a four-day trip around the far side of the Moon, tracing a figure-eight path that will take the crew more than 230,000 miles from Earth and thousands of miles beyond the lunar surface at its farthest point.

Rather than firing engines to return home, Orion will utilize a fuel-efficient free-return trajectory that leverages the gravitational forces of both Earth and the Moon to guide the spacecraft back to Earth during the roughly four-day return trip.

The mission will conclude with a high-speed reentry and splashdown in the Pacific Ocean off the coast of San Diego, where recovery teams from NASA and the Department of Defense will be on hand to retrieve the crew.

Artemis II follows the uncrewed Artemis I mission and is a crucial test of NASA’s deep-space systems before astronauts attempt a lunar landing on a future flight. NASA emphasizes that this mission is a key step toward long-term lunar exploration and eventual crewed missions to Mars, according to Fox News.

BioMarin Appoints Indian-American Arpit Davé as Chief Digital Officer

BioMarin Pharmaceutical Inc. has appointed Arpit Davé as its new Chief Digital and Information Officer, tasked with enhancing the company’s technology strategy and digital transformation efforts.

BioMarin Pharmaceutical Inc., a prominent global biotechnology firm specializing in rare diseases, has announced the appointment of Arpit Davé as Executive Vice President and Chief Digital and Information Officer. This newly created position underscores the company’s commitment to advancing its enterprise technology strategy.

In his role, Davé will focus on reimagining and executing BioMarin’s technology initiatives, data science, and digital transformation efforts. His leadership is expected to create significant value for patients, employees, and shareholders, as stated by the San Rafael, California-based company.

With over 20 years of experience in information technology and artificial intelligence (AI) within the biopharmaceutical sector, Davé is recognized as a proven leader. His career has been marked by a strong track record of driving patient-centered organizations toward measurable business growth and profitability.

Before joining BioMarin, Davé served as a technology executive at Amgen, Inc. for the past seven and a half years. In his most recent roles there, he led teams focused on digital transformation through AI and innovative digital technologies, positioning the company to remain competitive in an evolving landscape.

Davé’s previous experience includes leadership roles at Bristol Myers Squibb and Merck, where he concentrated on CIO leadership, data science, and research and development.

He holds a Master of Science in Industrial Engineering from the University of Texas and a Bachelor of Science in Mechanical Engineering from Sardar Patel University in Gujarat, India.

“Arpit is a visionary thinker and talented leader who brings to this role a deep understanding of the biopharmaceutical industry and a track record of using technology and AI to deliver for patients and the business,” said Alexander Hardy, President and Chief Executive Officer of BioMarin.

Hardy emphasized that Davé will be responsible for building a strategic vision and roadmap, deploying technologies that will enhance and differentiate BioMarin’s operations across various sectors, including research and development, manufacturing, and commercial organizations.

Expressing his enthusiasm for the new role, Davé stated, “I have long admired BioMarin’s dedication to people living with rare diseases, and I am excited to work as part of this team to create undeniable value for patients, employees, and shareholders.”

He further added, “I am honored to join BioMarin at this pivotal moment where the convergence of biology, data, and AI offers unprecedented potential; my focus will be on empowering our world-class teams and driving innovation to translate these capabilities into faster insights and the accelerated delivery of life-changing therapies to the patients who depend on us.”

Founded in 1997, BioMarin has established a strong reputation for innovation, boasting eight commercial therapies and a robust clinical and preclinical pipeline, according to the company’s release.

This strategic appointment reflects BioMarin’s ongoing commitment to leveraging technology and data to enhance its operations and improve patient outcomes.

According to The American Bazaar, Davé’s leadership is expected to play a crucial role in the company’s future endeavors.

CloudSEK Receives $10 Million Investment from Connecticut Innovations

CloudSEK, an Indian cybersecurity firm, has secured a $10 million investment from Connecticut Innovations, marking a significant milestone as the first Indian-origin cybersecurity company to receive funding from a U.S. state fund.

CloudSEK, a Bengaluru-based cybersecurity firm specializing in predictive cyber threat intelligence, has announced a strategic investment of $10 million from Connecticut Innovations (CI), the venture capital arm of the State of Connecticut. This funding is part of the company’s Series B2 round and positions CloudSEK as the first Indian-origin cybersecurity company to receive backing from a U.S. state-backed venture fund.

The investment follows CloudSEK’s previous fundraising efforts, which included $19 million raised in its Series B1 round. With the completion of this latest tranche, the company has successfully concluded its Series B funding round, which consists of both primary and secondary capital.

“Becoming the first Indian-origin cybersecurity company to receive backing from a U.S. state fund is a milestone for CloudSEK, as well as for the entire Indian cybersecurity ecosystem,” said Rahul Sasi, co-founder and CEO of CloudSEK. He emphasized the significance of this achievement for the company and the broader Indian cybersecurity landscape.

With Connecticut serving as its U.S. anchor, CloudSEK is committed to job creation, localized research investment, and enhancing cyber resilience in the Western world. Sasi expressed pride in advancing the company’s identity as a truly Indo-American cybersecurity firm.

The partnership with CI was established after CloudSEK distinguished itself as a leading startup at VentureClash, CI’s global investment pitch competition. “At our 2025 VentureClash India pitch event, CloudSEK distinguished itself as a truly innovative provider of cybersecurity and predictive threat capabilities used by hundreds of businesses around the world,” stated Alison Malloy, Managing Director of Investments at Connecticut Innovations.

CloudSEK plans to utilize this investment to accelerate its expansion in the U.S., with intentions to establish a regional hub for operations, talent, and partnerships in Connecticut. The company aims to onboard strategic local talent and forge collaborations with corporate partners, universities, and research institutions throughout the state.

The funding from CI will enable CloudSEK to recruit top-tier cybersecurity and AI talent from the region, establish partnerships with local academic and research institutions, build its U.S. headquarters in Connecticut, and drive region-specific cybersecurity research and innovation.

This landmark investment not only enhances CloudSEK’s global trajectory but also symbolizes the growing prominence of Indian cybersecurity innovation on the world stage. By solidifying its presence in Connecticut and continuing to expand globally, CloudSEK is well-positioned to bolster cyber resilience across continents and redefine cross-border technology collaboration.

Prior to this investment round, CloudSEK’s Series B1 was led by U.S.-based strategic investor Commvault, with participation from MassMutual Ventures, Inflexor Ventures, Prana Ventures, and Tenacity Ventures. Early investors, including the Meeran Family (Eastern Group), StartupXSeed, Neon Fund, and Exfinity Ventures, continue to support the company’s long-term growth.

In addition to this funding, CloudSEK recently announced a strategic partnership with Seed Group, a company of The Private Office of Sheikh Saeed bin Ahmed Al Maktoum, aimed at delivering predictive cyber intelligence and AI-attack detection capabilities to organizations across the UAE.

Founded in 2015 by Sasi, a cybersecurity researcher-turned-entrepreneur, CloudSEK has evolved from a research-first initiative into a leading cyber threat intelligence platform, serving over 300 enterprises across various sectors, including banking, financial services, insurance (BFSI), healthcare, technology, and government.

This investment marks a pivotal moment for CloudSEK and highlights the increasing collaboration between Indian tech firms and U.S. state-backed initiatives, paving the way for future innovations in the cybersecurity domain, according to The American Bazaar.

Robots Designed to Feel Pain Show Faster Reactions Than Humans

Scientists have developed a neuromorphic robotic e-skin that enables robots to detect harmful contact and react faster than humans, enhancing safety and interaction in various environments.

Touch something hot, and your hand instinctively pulls back before your brain even registers the pain. This rapid response is crucial in preventing injury. In humans, sensory nerves send immediate signals to the spinal cord, which triggers muscle reflexes. However, most robots currently lack this quick reaction capability. When a humanoid robot encounters something harmful, sensor data typically travels to a central processor, where it is analyzed before instructions are sent back to the motors. This delay can lead to broken parts or dangerous situations, particularly as robots become more integrated into homes, hospitals, and workplaces.

To address this challenge, scientists at the Chinese Academy of Sciences, along with collaborating universities, have developed a neuromorphic robotic e-skin, or NRE-skin. Unlike traditional robotic skins that merely detect touch, this innovative e-skin mimics the human nervous system, allowing robots to sense both contact and potential harm.

The e-skin consists of four layers that replicate the structure and function of human skin and nerves. The outermost layer serves as a protective covering, akin to the epidermis. Beneath this layer, sensors and circuits function like sensory nerves, continuously sending small electrical pulses to the robot every 75 to 150 seconds to confirm that everything is functioning normally. If the skin is damaged, this pulse ceases, alerting the robot to the injury’s location.

When the e-skin experiences normal contact, it sends neural-like spikes to the robot’s central processor for interpretation. However, if the pressure exceeds a predetermined threshold, the skin generates a high-voltage spike that bypasses the central processor and goes directly to the motors. This allows the robot to react instantly, pulling its arm away in a reflexive manner, similar to a human’s response to touching a hot surface. The pain signal is only activated when the contact is genuinely dangerous, preventing unnecessary overreactions.

This local reflex system not only reduces the risk of damage but also enhances safety and makes interactions with robots feel more natural. The e-skin’s design incorporates modular magnetic patches that can be easily replaced. If a section of the skin is damaged, an owner can simply remove the affected patch and snap in a new one, eliminating the need to replace the entire surface. This modular approach saves time, reduces costs, and extends the operational lifespan of robots.

As service robots increasingly work in close proximity to people, such as assisting patients or helping older adults, the ability to sense touch, pain, and injury becomes vital. This heightened awareness fosters trust and minimizes the risk of accidents caused by delayed reactions or sensor overload. The research team emphasizes that their neural-inspired design significantly improves robotic touch, safety, and intuitive human-robot interaction, marking a crucial step toward creating robots that behave more like responsive partners rather than mere machines.

The next challenge for researchers is to enhance the e-skin’s sensitivity, enabling it to recognize multiple simultaneous touches without confusion. If successful, this advancement could allow robots to perform complex physical tasks while remaining vigilant to potential dangers across their entire surface, bringing humanoid robots closer to instinctual behavior.

While the idea of robots that can feel pain may initially seem unsettling, it ultimately serves the purpose of protection, speed, and safety. By emulating the human nervous system, scientists are equipping robots with faster reflexes and improved judgment in the physical world. As robots become more integrated into daily life, these instinctual capabilities could prove to be transformative.

Would you feel more at ease around a robot capable of sensing pain and reacting instantly, or does this concept raise new concerns for you? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the development of this technology represents a significant leap forward in robotic capabilities.

Walmart Appoints Indian-American Shishir Mehrotra to Company Board

Walmart has appointed Shishir Mehrotra, CEO of Superhuman, to its Board of Directors as the retail giant prepares for an agentic AI future.

Walmart Inc. has announced the appointment of Shishir Mehrotra, an Indian American technology veteran and current CEO of Superhuman, to its Board of Directors. This move comes as the retail giant positions itself for an agentic AI future.

Mehrotra will contribute to both the Compensation and Management Development Committee and the Technology and eCommerce Committee, as stated by the Bentonville, Arkansas-based company.

Greg Penner, chairman of Walmart’s Board of Directors, expressed enthusiasm about Mehrotra’s addition, saying, “Our focus remains on serving customers through a people-led, tech-powered approach. Shishir’s background adds to our boardroom the insight of a proven builder, offering a distinguished track record scaling platforms relied upon by millions.”

Randall Stephenson, the lead independent director, echoed this sentiment, highlighting Mehrotra’s unique skill set. “Shishir brings a rare combination of technical depth and product leadership. He has helped create and scale platforms that unlock creativity and productivity for people and teams at global scale. We’re excited to welcome him to our Board,” he remarked.

In response to his appointment, Mehrotra stated, “I have long admired Walmart’s ability to innovate while staying true to its core values, and joining the Board as the company builds for an agentic AI future is a rare opportunity. This era is the most significant technological shift I’ve seen in my career, and I look forward to working with the team to shape the future for the millions of people Walmart serves.”

Mehrotra brings over 25 years of experience in the technology sector, with a proven track record of building category-defining platforms. Before his role at Superhuman, an email application designed for productivity enhancement, he co-founded Coda, a productivity and AI platform that successfully served millions of users and tens of thousands of teams.

Prior to founding Coda, Mehrotra held significant positions at YouTube, serving as both Chief Product Officer and Chief Technology Officer. During his tenure, he played a crucial role in transforming YouTube into the world’s largest video platform and one of Google’s most significant and rapidly growing businesses, catering to a new generation of creators.

Mehrotra holds a dual Bachelor of Science degree in mathematics and computer science from the Massachusetts Institute of Technology.

Walmart serves approximately 270 million customers and members each week across more than 10,750 stores and various eCommerce websites in 19 countries. The company reported a fiscal year 2025 revenue of $681 billion and employs around 2.1 million associates globally, according to the company’s release.

This strategic appointment reflects Walmart’s commitment to integrating advanced technology into its operations and enhancing customer service as it navigates the evolving landscape of retail.

According to The American Bazaar, Mehrotra’s expertise will be invaluable as Walmart continues to innovate and adapt in a rapidly changing market.

Jumio Appoints Indian-American Bala Kumar as President and Interim CEO

Jumio has appointed Bala Kumar as president and interim CEO, focusing on eradicating identity theft while enhancing digital interactions as the company prepares for its next phase of growth.

Jumio, a prominent provider of AI-powered identity intelligence solutions, has announced the appointment of Indian American executive Bala Kumar as its president and interim chief executive officer. This leadership change comes as the company aims to strengthen its position in a rapidly evolving market.

Kumar, who holds a master’s degree in Computer Applications from the National Institute of Technology Karnataka and has completed the Harvard Leadership Direct program, takes over from Robert Prigge. Prigge has led the company for nearly a decade and is departing to pursue new opportunities.

The transition in leadership is described by Jumio as a planned evolution, designed to ensure continuity and effective execution as the company embarks on its next phase of expansion. The firm is focused on maintaining its momentum in the identity verification and biometrics market.

Having joined Jumio in 2021, Kumar previously served as the chief product and technology officer. In this capacity, he successfully expanded Jumio’s offerings from a single product to a comprehensive portfolio of identity intelligence solutions, addressing the evolving needs of customers. He will continue to guide the company’s product vision and innovation.

Ben Cukier, co-chairman of Jumio’s board of directors, expressed confidence in Kumar’s capabilities. “This transition reflects the strength of our leadership bench and the company’s focus on disciplined execution,” Cukier stated. “With deep institutional knowledge and a proven track record of delivering results, Bala is exceptionally well-positioned to lead the company with full authority during this period while we conduct a thoughtful search for a CEO to fuel the next phase of Jumio’s growth.”

Kumar expressed his enthusiasm for his new role, stating, “I am honored to step into this role. We have a strong foundation, a clear strategy, and an incredibly talented team. My focus is on executing our strategy in service of our customers and Jumio’s core mission: eradicating identity theft while enabling trusted, low-friction digital interactions for consumers and businesses both now and in the future.”

The Jumio Platform offers AI-powered identity intelligence that integrates biometric authentication, automation, and data-driven insights. This technology is designed to accurately establish, maintain, and reassert trust throughout the customer journey, from account opening to ongoing monitoring.

Utilizing advanced automated technology, including biometric screening, AI and machine learning, liveness detection, and no-code orchestration with hundreds of data sources, Jumio aims to combat fraud and financial crime. The platform also facilitates faster customer onboarding and ensures compliance with regulatory requirements, including Know Your Customer (KYC) and Anti-Money Laundering (AML) standards.

With a global presence that includes offices in North America, Latin America, Europe, Asia Pacific, and the Middle East, Jumio has processed over one billion transactions across more than 200 countries and territories, encompassing real-time web and mobile transactions.

This strategic appointment of Bala Kumar as president and interim CEO marks a significant step for Jumio as it continues to innovate and lead in the identity verification space, ensuring a secure digital environment for businesses and consumers alike.

According to The American Bazaar, this leadership change positions Jumio for continued growth and success in the identity intelligence sector.

Laurent Simons: The Controversial Journey of a Child Prodigy in Human Enhancement

Laurent Simons, a Belgian prodigy who earned his PhD at 15, is now navigating the contentious field of human enhancement through artificial intelligence and medical science.

A doctoral degree earned at the age of 15 is not, by itself, a scientific breakthrough. It is a personal milestone—rare and extraordinary—often framed as a story of exceptional intellect rather than institutional transformation. However, when such an achievement is followed by an explicit ambition to reshape human biology through artificial intelligence, the narrative shifts from mere curiosity to significant consequence.

This is the case with Laurent Simons, a Belgian prodigy whose academic trajectory has unfolded at an unprecedented pace. Having completed high school by the age of eight, Simons went on to obtain both a bachelor’s and a master’s degree in physics in under two years. In late 2025, at just 15 years old, he formally defended his PhD in theoretical quantum physics at the University of Antwerp—through standard academic channels, under conventional supervision, and without honorary acceleration.

The credentials are verifiable. The thesis exists, the defense was public, and the institution is accredited. Yet Simons’ next move—venturing into medical science and artificial intelligence with the stated aim of “creating superhumans”—has placed him at the edge of some of the most contentious debates in modern science.

Simons’ doctoral dissertation, titled “Bose polarons in superfluids and supersolids,” examined the behavior of impurity particles within Bose–Einstein condensates—states of matter formed when atoms are cooled to near absolute zero, causing quantum effects to emerge on a macroscopic scale.

This area of condensed matter physics has implications for quantum simulation, low-temperature systems, and many-body interactions. According to documentation released by the University of Antwerp, Simons satisfied all academic and research requirements associated with the degree.

As part of his doctoral work, he also completed an internship at the Max Planck Institute for Quantum Optics, contributing to research on quasiparticle interactions in ultracold atomic environments. These institutions have not challenged the legitimacy of his academic record, and while the speed of his progress remains extraordinary, the process itself was conventional.

Immediately following his doctoral defense, Simons relocated to Munich to begin a second PhD program—this time in medical science, with a focus on artificial intelligence. This shift marks a departure from abstract quantum modeling into applied biological and computational research.

In a televised interview with Belgian broadcaster VTM, Simons articulated his long-term ambition in unusually direct terms. “After this, I’ll start working towards my goal: creating superhumans,” he stated.

Earlier reporting by The Brussels Times noted that Simons has discussed defeating aging since the age of 11, framing longevity as both a scientific and moral imperative. While details of his current research remain undisclosed, available information suggests that his work is concentrated on conceptual and computational models rather than laboratory-based biomedical experimentation. Areas of interest reportedly include AI-driven diagnostics, regenerative medicine frameworks, and lifespan modeling.

At this stage, there is no public evidence that Simons is involved in clinical trials or human-subject research.

Simons’ ambitions align with a rapidly expanding research landscape focused on human longevity and biological optimization. Well-funded private ventures such as Altos Labs and Calico Life Sciences are investigating cellular reprogramming, senolytics, and genetic pathways associated with aging and disease resistance.

At the academic level, journals such as Nature Aging and Cell Reports Medicine continue to publish work on machine-learning-based disease detection, gene expression analysis, and tissue regeneration. Yet much of this research remains exploratory, and the practical limits of “enhancement” remain undefined.

What distinguishes Simons is not merely his age, but the unusual bridge he is attempting to cross. Transitions from theoretical quantum physics into applied medical science are rare, particularly at the doctoral level, where disciplinary depth typically outweighs breadth.

The notion of engineering “superhumans” lacks scientific consensus and ethical clarity. According to the Stanford Encyclopedia of Philosophy, debates surrounding human enhancement revolve around whether interventions are therapeutic, elective, or fundamentally transformational.

At present, there is no indication that Simons’ research violates existing ethical frameworks. His academic affiliations have not publicly raised concerns, and his work appears to fall within early-stage theoretical exploration.

Nevertheless, the convergence of artificial intelligence, medicine, and long-term biological redesign presents governance challenges. Questions of supervision, peer review, and interdisciplinary oversight are still being negotiated across the field. The involvement of a researcher below the age of legal adulthood introduces further complexity.

For now, Laurent Simons represents neither a scientific revolution nor a regulatory failure. He is, instead, a data point at the frontier—where exceptional individual capability intersects with emerging technologies whose implications remain unresolved.

Whether his ambitions lead to meaningful breakthroughs or remain aspirational will depend not on speed, but on scrutiny, according to The Brussels Times.

Indian-American Students Develop Health Insurance Decision-Making Tool

Indian American students Sunveer Chugh and Dev Gupta have developed a digital tool, InsuraBridge, to assist consumers in making informed health insurance decisions.

Sunveer Chugh and Dev Gupta, two Indian American undergraduates at Case Western Reserve University in Cleveland, Ohio, have created a digital tool designed to help consumers navigate the complexities of health insurance purchasing on healthcare.gov.

The innovative tool, named InsuraBridge, aims to simplify the process of understanding critical aspects of health insurance plans, such as out-of-pocket maximums and in-network doctors, according to a university press release.

Chugh, a computer science major, and Gupta, who studies quantitative economics and healthcare management, recently showcased their startup at the Consumer Electronics Show (CES) in Las Vegas, one of the largest technology events in the world.

Gupta highlighted the challenge many consumers face, stating, “Millions of people buy insurance through healthcare exchanges, but there can be hundreds of plan options. Even for tech-savvy consumers, it’s nearly impossible to know which one is right for you.”

InsuraBridge employs advanced analytics to evaluate users’ preferences, including cost sensitivity, preferred doctors, and anticipated healthcare needs. The tool then provides tailored plan recommendations based on these assessments. This technology is built on a patented algorithm and utilizes an application programming interface (API) connected to healthcare exchanges.

“Think of it as a digital co-pilot for choosing insurance,” Chugh explained. “We want to give people clarity and confidence in a process that’s usually overwhelming.”

The duo presented their prototype at CES 2026’s University Innovations section, joining hundreds of emerging founders from around the globe.

Gupta emphasized their mission, saying, “Our goal is to make health insurance transparent, thus ensuring access, establishing care, and expanding medicine.” Chugh added, “If we can help people make better choices for their health and finances, that’s a win.”

Looking ahead, InsuraBridge is preparing to launch a new Medicaid application tool. This tool aims to streamline workflows by consolidating patient information and autocompleting applications in just minutes, significantly reducing the time typically required for the process.

Ray Herschman, an adjunct professor at the Weatherhead School of Management, and Mark Votruba, an associate professor at the same institution, have been instrumental in guiding the students throughout the development of their digital tool.

Herschman noted that InsuraBridge exemplifies the university’s commitment to innovation and social impact. “These students saw a problem that affects millions and used technology to fix it,” he said. “The InsuraBridge application connects to the Healthcare.gov website’s API to access key data that powers the healthcare exchange’s health plan options and associated benefit and provider network attributes, empowering consumers to make informed decisions.”

As the healthcare landscape continues to evolve, tools like InsuraBridge may play a crucial role in helping consumers navigate their options and make informed choices about their health insurance.

According to Case Western Reserve University, the development of such innovative solutions reflects a growing trend among students to address real-world challenges through technology.

Why January Is the Ideal Time to Remove Personal Data Online

January is a crucial month for online privacy, as scammers refresh their target lists, making it the ideal time to remove personal data from the internet.

As the new year begins, many people take the opportunity to reset their lives—setting new goals, organizing their spaces, and cleaning out their inboxes. However, it’s not just individuals who are hitting the reset button; scammers are doing the same, particularly when it comes to personal data.

January marks a significant period for online privacy, as data brokers refresh their profiles and scammers rebuild their target lists. This means that the longer your personal information remains online, the more comprehensive and valuable your profile becomes to those looking to exploit it.

To combat this growing threat, institutions such as the U.S. Department of the Treasury have issued advisories urging individuals to remain vigilant and take proactive measures against data-related scams. By acting early in the year, you can significantly reduce the likelihood of falling victim to scams, lower the risk of identity theft, and limit unwanted exposure throughout the year.

Many people mistakenly believe that outdated information becomes irrelevant over time. Unfortunately, this is not the case with data brokers. These entities do not merely store a static snapshot of who you are; they create dynamic profiles that evolve over time, incorporating new data points such as:

Each year adds another layer to your profile—a new address, a changed phone number, or even a family connection. While a single data point may seem insignificant, together they form a detailed identity profile that scammers can use to impersonate you convincingly. Therefore, delaying action only exacerbates the problem.

Scammers do not target individuals randomly; they work from organized lists. At the start of the year, these lists are refreshed, akin to a spring cleaning for criminals who are preparing to exploit identities for the next twelve months. Once your profile is flagged as responsive or profitable, it often remains in circulation.

Removing your data early is not just about preventing immediate scams; it is about disrupting the supply chain that fuels these criminal activities. When your information is eliminated from data broker databases, it has a compounding effect. The fewer lists you appear on in January, the less likely your data will be reused, resold, or recycled throughout the year. This is why it is essential to address data exposure proactively rather than reactively.

January is particularly critical for retirees and families, who are often more susceptible to fraud, scams, and other crimes. Scammers are aware of this and prioritize households with established financial histories early in the year.

Many individuals attempt to start fresh in January by taking various steps, such as:

While these actions are beneficial, they do not eliminate your data from broker databases. Credit monitoring services can alert you after a problem has occurred, password changes do not affect public profiles, and unsubscribing does not prevent data resale. If your personal information remains in numerous databases, scammers can easily locate you.

If you want to minimize scam attempts throughout the year, the most effective strategy is to remove your personal data at the source. You can achieve this in one of two ways: by submitting removal requests yourself or by employing a professional data removal service to handle the process for you.

Manually removing your data involves identifying dozens or even hundreds of data broker websites, locating their opt-out forms, and submitting removal requests one by one. This method requires verifying your identity, tracking responses, and repeating the process whenever your information resurfaces. While effective, it demands considerable time, organization, and ongoing follow-up.

On the other hand, a data removal service can manage this process on your behalf. These services typically:

Given the sensitive nature of personal information, it is crucial to select a data removal service that adheres to strict security standards and employs verified removal methods. While no service can guarantee complete removal of your data from the internet, utilizing a data removal service is a prudent choice. Although these services may come at a cost, they handle the work for you by actively monitoring and systematically erasing your personal information from numerous websites. This approach provides peace of mind and has proven to be the most effective way to safeguard your personal data.

By limiting the information available online, you reduce the risk of scammers cross-referencing data from breaches with information they may find on the dark web, making it more challenging for them to target you.

As January unfolds, it is essential to recognize that scammers do not wait for mistakes; they wait for exposed data. This month is when profiles are refreshed, lists are rebuilt, and targets are selected for the year ahead. The longer your personal information remains online, the more complete—and dangerous—your digital profile becomes.

The good news is that you can break this cycle. Removing your data now can reduce scam attempts, protect your identity, and lead to a quieter, safer year ahead. If you are going to make one privacy move this year, make it early—and make it count.

Have you ever been surprised by how much of your personal information was already online? Share your experiences with us at Cyberguy.com.

For more information on data removal services and to check if your personal information is already available online, visit Cyberguy.com.

According to CyberGuy.com, taking proactive steps in January can significantly enhance your online privacy and security.

Meta Partners with Three Companies for Nuclear Power Initiatives

Meta has entered into 20-year agreements to purchase power from three Vistra nuclear plants and collaborate on small modular reactor projects with two companies.

Meta announced on Friday that it has secured 20-year agreements to purchase power from three nuclear plants operated by Vistra Energy. The company also plans to collaborate with two firms focused on developing small modular reactors (SMRs).

According to Meta, the power purchase agreements will involve Vistra’s Perry and Davis-Besse plants in Ohio, as well as the Beaver Valley plant in Pennsylvania. These agreements are expected to facilitate financial support for the expansion of the Ohio facilities while extending their operational lifespan. The plants are currently licensed to operate until at least 2036, with one of the reactors at Beaver Valley licensed to run through 2047.

In addition to the power agreements, Meta will assist in the development of small modular reactors being planned by Oklo and TerraPower. Proponents of SMRs argue that these reactors could ultimately reduce costs, as they can be manufactured in factories rather than constructed on-site. However, some industry experts remain skeptical about whether SMRs can achieve the same economies of scale as traditional large reactors. Currently, there are no commercial SMRs operating in the United States, and the proposed plants will require regulatory permits before construction can begin.

Joel Kaplan, Meta’s chief global affairs officer, emphasized the significance of these agreements, stating that they, along with a previous agreement with Constellation to maintain an Illinois reactor’s operation for another 20 years, position Meta as one of the largest corporate purchasers of nuclear energy in U.S. history.

Meta’s agreements are projected to provide up to 6.6 gigawatts of nuclear power by 2035. The company will also help fund the development of two reactors by TerraPower, which are expected to generate up to 690 megawatts of power as early as 2032. This partnership grants Meta rights to energy from up to six additional TerraPower reactors by 2035. Chris Levesque, President and CEO of TerraPower, noted that this agreement will facilitate the rapid deployment of new reactors.

The trend of tech companies investing in nuclear energy has been gaining momentum. Last October, both Amazon and Google announced plans to invest in the development of small nuclear reactors, a technology that is still in its nascent stages. These initiatives aim to address the high costs and lengthy construction timelines that have historically hindered new reactor projects in the U.S.

Meta, along with other major tech firms such as Amazon and Google, has signed the Large Energy Consumers Pledge, committing to help triple the nation’s nuclear energy output by 2050. As these companies expand their artificial intelligence centers, they are becoming significant contributors to the increasing energy demands in the United States. Other notable organizations, including Occidental and IHI Corp, have also joined this initiative, indicating widespread corporate support for the nation’s nuclear energy goals.

As the energy landscape continues to evolve, Meta’s strategic investments in nuclear power reflect a growing recognition of the role that nuclear energy can play in meeting future energy needs.

According to The American Bazaar, these developments highlight a broader trend among tech companies to embrace nuclear energy as a sustainable solution to rising energy demands.

Health Tech Innovations Highlighted at CES 2026

Innovations showcased at CES 2026 are transforming health technology, featuring AI-driven devices aimed at enhancing wellness, mobility, and safety.

The Consumer Electronics Show (CES) 2026 is currently taking place in Las Vegas, showcasing the latest advancements in consumer technology. This annual event, which spans four days every January, attracts tech companies, startups, researchers, investors, and journalists from around the globe. CES serves as a preview for products that could soon find their way into homes, hospitals, gyms, and workplaces.

This year, while flashy gadgets and robots capture attention, health technology is at the forefront, with a focus on prevention, recovery, mobility, and long-term well-being. Here are some standout health tech products that have garnered significant interest at CES 2026.

NuraLogix has introduced a groundbreaking smart mirror that transforms a brief selfie video into a comprehensive overview of an individual’s long-term health. The Longevity Mirror uses artificial intelligence to analyze subtle blood flow patterns in the user’s face, providing scores for metabolic health, heart health, and physiological age on a scale from zero to 100. Results are delivered in approximately 30 seconds, accompanied by clear explanations and recommendations. The AI system has been trained on hundreds of thousands of patient records, allowing it to convert raw data into understandable insights. The mirror supports up to six user profiles and is set to launch in early 2026 for $899, which includes a one-year subscription. Subsequent annual subscriptions will cost $99, with optional concierge support available to connect users with nutrition and wellness experts.

Ascentiz showcased its H1 Pro walking exoskeleton, which emphasizes real-world mobility applications. This lightweight, modular device is designed to reduce strain while providing motor-assisted movement over longer distances. The system employs AI to adapt assistance based on the user’s motion and terrain, making it effective on inclines and uneven surfaces. Its compact design features a belt-based attachment system, and its dust- and water-resistant construction allows for outdoor use in various conditions. Ascentiz also offers more powerful models, including Ultra and knee or hip-attached versions, demonstrating the shift of exoskeletons from clinical rehabilitation to everyday mobility support.

Cosmo Robotics received a CES Innovation Award for its Bambini Kids exoskeleton, the first overground pediatric exoskeleton with powered ankle motion. Designed for children aged 2.5 to 7 with congenital or acquired neurological disorders, this system offers both active and passive gait training modes. By encouraging guided and natural movement, it helps children relearn walking skills while minimizing complications associated with conditions like cerebral palsy.

For those who spend significant time indoors, the Sunbooster device offers a practical solution for replacing the benefits of natural sunlight. This innovative product clips onto a monitor, laptop, or tablet, projecting near-infrared light while users work, without causing noise or disruption. Near-infrared light, a natural component of sunlight, is associated with improved energy levels, mood, and skin health. Sunbooster utilizes patented SunLED technology to deliver controlled exposure and tracks daily dosage, encouraging two to four hours of use during screen time. The technology has been validated through human and laboratory studies conducted at the University of Groningen and Maastricht University, providing scientific support for its claims. The company is also developing a phone case and a monitor with built-in near-infrared lighting to further enhance indoor sunlight replacement.

Allergen Alert addresses the challenges of dining out with food allergies. This handheld device tests small food samples inside a sealed, single-use pouch, detecting allergens or gluten in meals within minutes. Built on laboratory-grade technology derived from bioMérieux expertise, the system automates the analytical process, delivering results without requiring technical knowledge. Allergen Alert aims to restore confidence and inclusion at the dining table, with plans for pre-orders at the end of 2026 and future expansions to test additional common allergens.

Samsung previewed its Brain Health feature for Galaxy wearables, a research-driven tool that analyzes walking patterns, voice changes, and sleep data to identify potential early signs of cognitive decline. This system leverages data from devices like the Galaxy Watch and Galaxy Ring to establish a personal baseline, monitoring for subtle deviations linked to early dementia. Samsung emphasizes that Brain Health is not intended to diagnose medical conditions but rather to provide early warnings that encourage users and their families to seek professional evaluations sooner. While a public release date has not been confirmed, CES 2026 attendees can experience an in-person demo of the feature.

Withings is redefining the capabilities of bathroom scales with its BodyScan 2, which has earned a CES 2026 Innovation Award. In less than 90 seconds, this smart scale measures ECG data, arterial stiffness, metabolic efficiency, and hypertension risk. The connected app allows users to observe how factors like stress, sedentary habits, menopause, or weight changes impact their cardiometabolic health, shifting the focus from weight alone to early health indicators that can be tracked over time.

Garmin received a CES Innovation Honoree Award for its Venu 4 smartwatch, which features a new health status indicator that highlights when metrics such as heart rate variability and respiration deviate from personal baselines. The watch also includes lifestyle logging, linking daily habits to sleep and stress outcomes, and boasts up to 12 days of battery life for continuous tracking without nightly charging.

Ring introduced Fire Watch, an opt-in feature that utilizes AI to detect smoke and flames from compatible cameras. During wildfires, users can share snapshots with Watch Duty, a nonprofit organization that distributes real-time fire alerts to communities and authorities, demonstrating how existing home technology can enhance public safety during environmental emergencies.

Finally, the RheoFit A1 may be the most relaxing health gadget at CES 2026. This AI-powered robotic roller glides beneath the user’s body to deliver a full-body massage in about 10 minutes. With interchangeable massage attachments and activity-specific programs, it targets soreness from workouts or long hours spent at a desk. The companion app employs an AI body scan to automatically adjust pressure and focus areas.

CES 2026 highlights the evolution of health technology, making it more practical and personal. Many showcased products prioritize early problem detection, stress reduction, and informed health decision-making. As technology becomes increasingly integrated into daily life, these innovations promise to enhance safety and well-being.

Which of these health tech products from CES 2026 would you find most useful in your daily life? Share your thoughts with us at Cyberguy.com.

According to CyberGuy.com.

AI Workplace Competition: Analyzing Claude, Gemini, ChatGPT, and Others

Recent survey findings reveal that Anthropic’s Claude is the most popular AI tool among U.S. professionals, surpassing competitors like ChatGPT and Google’s Gemini.

In the rapidly evolving landscape of artificial intelligence, a new survey sheds light on the preferences of U.S. professionals regarding workplace AI tools. While major tech companies are eager to promote their proprietary AI solutions, it appears that users are making their choices based on performance rather than corporate allegiance.

Conducted by Blind, an anonymous professional community platform, the survey indicates that Claude, developed by Anthropic, has emerged as the most widely used AI model in corporate environments. Surprisingly, Claude has outperformed more established competitors, including ChatGPT and Google’s Gemini. According to the survey, 31.7% of respondents reported using Claude as their primary AI tool at work, regardless of their employer’s preferences.

The survey collected responses from verified U.S.-based professionals during December, with a significant number identifying as software engineers. Participants sought AI assistance across various tasks, including debugging, system design, documentation, and content generation.

Despite Claude’s leading position, the survey reveals a more complex reality: professionals are not committing to a single AI model. Instead, many are curating personalized toolkits tailored to their specific needs. Vasudha Badri Paul, founder of Avatara AI, shared her experience, stating that her daily workflow involves multiple platforms. “I use Perplexity and Notebook LLM most frequently. For research and learning, I go to Claude and Gemini, while ChatGPT is my go-to for content,” she explained. Paul also incorporates Notion AI for organization, Sora for short video generation, Canva Magic Studio for graphics, and Gamma for slide decks.

This trend reflects a pragmatic approach among users, who are increasingly willing to switch between tools rather than remain loyal to a single ecosystem.

When it comes to coding, Claude’s advantages become particularly pronounced. The survey indicates that among developers, Claude excels in software development tasks. Many respondents highlighted its capabilities in writing and understanding complex code, an area where company-backed tools often face resistance. The survey found that 19.6% of professionals use ChatGPT, while 15% rely on Gemini. GitHub Copilot is close behind with 14.2%, and another 11.5% reported using Cursor.

The survey also explored preferences within companies that have their own AI products. At Meta, for instance, 50.7% of surveyed employees indicated that Claude was their preferred AI model, while only 8.2% reported using Meta AI. A similar trend was observed among Microsoft employees, where 34.8% favored Claude, narrowly ahead of Copilot at 32.2%, with ChatGPT trailing at 18.3%.

One key takeaway from the survey is that corporate backing does not necessarily guarantee employee loyalty. In an era where productivity is increasingly driven by AI tools, professionals are prioritizing effectiveness over brand allegiance.

Nitin Kumar, an app developer and solutions manager, noted the shift in his own AI stack over the past year. He stated, “Claude is definitely the most superior for software development.” Kumar recently canceled his ChatGPT Plus subscription, citing a lack of utility. However, he acknowledged that the AI landscape is still evolving, adding, “Gemini 3 Pro changed the game completely for non-coding uses.” He believes that coding capabilities are now nearly on par with Claude Opus 4.5.

Kumar’s insights reflect a broader trend of users experimenting with different tools and comparing version upgrades to find the best fit for their needs.

Interestingly, Google employees showed the strongest internal alignment, with 57.6% of those surveyed using Gemini as their primary AI model. However, this preference did not extend beyond Google’s offices, as only 11.6% of Amazon employees selected Gemini as their top choice. Amazon’s own AI tools, such as Amazon CodeWhisperer, received minimal traction, with just 0.7% of respondents indicating they used it.

Ultimately, the survey highlights a significant shift in how professionals engage with AI. Rather than adopting tools based on corporate mandates or branding, workers are choosing solutions that demonstrably enhance their speed, accuracy, and overall output. While Claude currently leads the pack, its dominance may not be permanent, but it has certainly established a measure of trust among users for now.

According to Blind, the findings underscore the importance of user experience in the competitive AI landscape.

Ex-Amazon Executives Secure $15 Million for Spangle AI Startup

Spangle AI, a startup founded by former Amazon executives, has secured $15 million in Series A funding to enhance real-time, personalized shopping experiences for online retailers.

Spangle AI, a Seattle-based startup focused on revolutionizing online retail, has successfully raised $15 million in a Series A funding round. The investment was led by NewRoad Capital Partners, with participation from Madrona, DNX Ventures, Streamlined Ventures, and several angel investors. Following this funding, Spangle AI is now valued at $100 million.

Founded in 2022 by a team of former Amazon executives, Spangle AI aims to create customized shopping experiences in real-time. The platform can generate tailored storefronts for individual customers by analyzing traffic from various sources, including social media, AI search tools, and autonomous shopping agents.

Spangle AI is addressing a significant shift in e-commerce, moving away from traditional methods that cater primarily to customers visiting a brand’s website directly. “The problem is that websites are not designed to continue a journey that originated somewhere else,” said Spangle CEO Maju Kuruvilla, who previously served as a vice president at Amazon, where he was involved in Prime logistics and fulfillment.

Fei Wang, Spangle’s CTO and a former Principal Engineer at Amazon, emphasized the limitations of existing e-commerce systems. “Having built unified AI systems at Amazon, including Alexa and customer service workflow automation at massive scale, we saw what’s broken in traditional e-commerce stacks: fragmented data, slow feedback cycles, and no intelligence layer tying it together,” Wang explained.

Unlike conventional approaches that rely heavily on user identity or historical data, Spangle’s system focuses on understanding customer intent and engagement. It is trained on a retailer’s catalog, brand guidelines, and performance metrics, allowing for a more contextual shopping experience.

Spangle AI’s innovative approach has attracted the attention of major fashion and retail brands, including EVOLVE, Steve Madden, and Alexander Wang. These partnerships have reportedly resulted in conversion rate increases of up to 50% and significant improvements in return on ad spend. In its first nine months, Spangle AI has secured nine enterprise customers, although the company has not disclosed specific revenue figures.

Kuruvilla noted that while e-commerce retailers excel at attracting customer interest, the challenge lies in converting that interest into sales. “Conversion from all this traffic that’s discovered outside is a huge problem for all these brands,” he stated.

Prior to founding Spangle AI, Kuruvilla was the CEO and CTO at Bolt, a controversial one-click checkout e-commerce startup that achieved a valuation of $11 billion. His extensive background also includes roles at Microsoft, Honeywell, and Milliman.

Fei Wang, who co-founded Spangle AI, previously served as CTO at Saks OFF 5TH, a subsidiary of Saks Fifth Avenue. He spent nearly 12 years at Amazon as an engineer. Yufeng Gou, the head of engineering at Spangle, also has a background at Saks OFF 5TH. Karen Moon, the company’s COO, is a seasoned investor and former CEO at Trendalytics.

As the e-commerce landscape continues to evolve, Spangle AI is positioning itself at the forefront of agentic commerce, leveraging its founders’ extensive experience to create a more seamless and personalized shopping experience for consumers.

The information in this article is based on reports from The American Bazaar.

Plastic Bottles May One Day Power Your Electronic Devices

Researchers have developed a method to transform discarded plastic bottles into supercapacitors, potentially powering electric vehicles and electronics within the next decade.

Every year, billions of single-use plastic bottles contribute to the growing waste crisis, ending up in landfills and oceans. However, a recent scientific breakthrough suggests that these discarded bottles could play a role in powering our daily lives.

Researchers have successfully created high-performance energy storage devices known as supercapacitors from waste polyethylene terephthalate (PET) plastic, commonly found in beverage containers. This innovative research, published in the journal Energy & Fuels and highlighted by the American Chemical Society, aims to reduce plastic pollution while advancing cleaner energy technologies.

According to the researchers, over 500 billion single-use PET plastic bottles are produced globally each year, with most being used once and then discarded. Lead researcher Dr. Yun Hang Hu emphasizes that this scale of production presents a significant environmental challenge. Instead of allowing this plastic to accumulate, the research team focused on upcycling it into valuable materials that can support renewable energy systems and reduce production costs.

Supercapacitors are devices that can charge quickly and deliver power instantly, making them ideal for applications in electric vehicles, solar power systems, and everyday electronics. Dr. Hu’s team discovered a method to manufacture these energy storage components using discarded PET plastic bottles. By reshaping the plastic at extremely high temperatures, they transformed waste into materials capable of generating electricity efficiently and repeatedly.

The process begins with cutting the PET bottles into tiny, grain-sized pieces. These pieces are then mixed with calcium hydroxide and heated to nearly 1,300 degrees Fahrenheit in a vacuum. This intense heat converts the plastic into a porous, electrically conductive carbon powder. The researchers then form this powder into thin electrode layers.

For the separator, small pieces of PET are flattened and perforated with hot needles to create a pattern that allows electric current to pass through efficiently while ensuring safety and durability. Once assembled, the supercapacitor consists of two carbon electrodes separated by the PET film and submerged in a potassium hydroxide electrolyte.

In testing, the all-waste-plastic supercapacitor outperformed similar devices made with traditional glass fiber separators. After repeated charging and discharging cycles, it retained 79 percent of its energy capacity, compared to 78 percent for a comparable glass fiber device. This slight advantage is significant; the PET-based design is cheaper to produce, fully recyclable, and supports circular energy storage technologies that reuse waste materials instead of discarding them.

This breakthrough could have a more immediate impact on everyday life than one might expect. The development of cheaper supercapacitors could lower the costs associated with electric vehicles, solar systems, and portable electronics. Faster charging times and longer lifespans for devices may soon follow. Furthermore, this research illustrates that sustainability does not necessitate sacrifices; waste plastics can become part of the solution rather than remaining a persistent problem.

While this technology is still under development, the research team is optimistic that PET-based supercapacitors could reach commercial markets within the next five to ten years. In the meantime, opting for reusable bottles and plastic-free alternatives remains a practical way to help reduce waste today.

Transforming waste into energy storage is not just an innovative idea; it demonstrates how science can address two pressing global challenges simultaneously. As plastic pollution continues to escalate, so does the demand for energy. This research shows that these issues do not need to be tackled in isolation. By reimagining waste as a resource, scientists are paving the way for a cleaner and more efficient future using materials we currently discard.

If your empty water bottle could one day help power your home or vehicle, would you still view it as trash? Let us know your thoughts by reaching out to us.

According to Fox News, this research highlights the potential of upcycling waste materials to create sustainable energy solutions.

Earth Prepares to Say Goodbye to ‘Mini Moon’ Asteroid Until 2055

Earth is set to bid farewell to a “mini moon” asteroid that has been in close proximity for the past two months, with plans for a return visit in 2055.

Earth is parting ways with an asteroid that has been accompanying it as a “mini moon” for the last two months. This harmless space rock is expected to drift away on Monday, influenced by the stronger gravitational pull of the sun.

However, the asteroid, designated 2024 PT5, will make a brief return visit in January. NASA plans to utilize a radar antenna to observe the 33-foot asteroid during this time, which will enhance scientists’ understanding of the object. It is believed that 2024 PT5 may be a boulder that was ejected from the moon due to an impact from a larger asteroid.

While NASA clarifies that this asteroid is not technically a moon—having never been fully captured by Earth’s gravity—it is still considered “an interesting object” worthy of scientific study. The asteroid was identified by astrophysicist brothers Raul and Carlos de la Fuente Marcos from Complutense University of Madrid, who have conducted hundreds of observations in collaboration with telescopes located in the Canary Islands.

Currently, the asteroid is more than 2 million miles away from Earth, making it too small and faint to be observed without a powerful telescope. In January, it will pass within approximately 1.1 million miles of Earth, maintaining a safe distance before continuing its journey deeper into the solar system. The asteroid is not expected to return until 2055, at which point it will be nearly five times farther away than the moon.

First detected in August, 2024 PT5 began its semi-orbital path around Earth in late September after being influenced by Earth’s gravity, following a horseshoe-shaped trajectory. By the time of its return next year, the asteroid will be traveling at more than double its speed from September, making it too fast to linger, according to Raul de la Fuente Marcos.

NASA plans to track the asteroid for over a week in January using the Goldstone solar system radar antenna located in California’s Mojave Desert, which is part of the Deep Space Network. Current data indicates that during its 2055 visit, this sun-orbiting asteroid will once again make a temporary and partial lap around Earth.

According to NASA, the study of such asteroids can provide valuable insights into the history and composition of celestial bodies in our solar system.

Musk’s Grok AI Chatbot Raises Concerns Over Inappropriate Images

Elon Musk’s AI chatbot Grok faces global backlash as concerns rise over the generation of sexualized images of women and children without consent, prompting investigations and demands for regulatory action.

Elon Musk’s artificial intelligence chatbot Grok is currently under intense scrutiny from governments around the world. Authorities in Europe, Asia, and Latin America have raised serious concerns regarding the creation and circulation of sexualized images of women and children generated without consent.

This backlash follows a troubling increase in explicit content linked to Grok Imagine, an AI-powered image generation feature integrated into Musk’s social media platform, X. Regulators are warning that the tool’s capacity to digitally alter real images using text prompts has exposed significant gaps in AI governance, which could lead to potentially irreversible harm, particularly affecting women and minors.

Countries including the United Kingdom, the European Union, France, India, Poland, Malaysia, and Brazil have either demanded immediate corrective action, initiated investigations, or threatened regulatory penalties. This situation signals what could become one of the most significant international confrontations regarding the misuse of generative AI to date.

Grok Imagine was launched last year, allowing users to create or modify images and videos through simple text commands. The tool features a “spicy mode” designed to permit adult content. While marketed as an edgy alternative to more restricted AI systems, critics argue that this positioning has encouraged misuse.

The controversy escalated recently when Grok reportedly began approving a large volume of user requests to alter images of individuals posted by others on X. Users could generate sexualized depictions by instructing the chatbot to digitally remove or modify clothing. Since Grok’s generated images are publicly displayed on the platform, altered content spread rapidly.

A recent analysis by digital watchdog AI Forensics reviewed 20,000 images generated over a one-week period and found that approximately 2% appeared to depict individuals who looked under 18. Many images showed young or very young-looking girls in bikinis or transparent clothing, raising urgent concerns about AI-enabled sexual exploitation.

Experts warn that such nudification tools blur the line between consensual creativity and non-consensual abuse, making regulation particularly challenging once content goes viral.

In response to media inquiries, Musk’s AI company, xAI, issued an automated message stating, “Legacy Media Lies.” While the company did not deny the existence of problematic Grok content, X maintained that it enforces rules against illegal material.

On its Safety account, the platform stated that it removes unlawful content, permanently suspends accounts, and cooperates with law enforcement when necessary. Musk echoed this sentiment, asserting, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

However, critics argue that enforcement after harm occurs does little to protect victims, especially when AI tools enable rapid and repeated abuse.

In the United Kingdom, Technology Secretary Liz Kendall described the content linked to Grok as “absolutely appalling” and demanded urgent intervention by X. “We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls,” Kendall stated.

The UK communications regulator Ofcom confirmed it has made urgent contact with both X and xAI to assess compliance with the Online Safety Act, which mandates platforms to prevent and remove child sexual abuse material once identified.

The European Commission has also taken a firm stance on the issue. Commission spokesman Thomas Regnier stated that officials are fully aware of Grok being used to generate explicit sexual content, including imagery resembling children. “This is not spicy. This is illegal. This is appalling. This is disgusting, and it has no place in Europe,” Regnier asserted.

EU officials noted that Grok had previously drawn attention for generating Holocaust-denial content, further raising concerns about the platform’s safeguards and oversight mechanisms.

In France, prosecutors have expanded an ongoing investigation into X to include sexually explicit AI-generated deepfakes. This move follows complaints from lawmakers and alerts from multiple government ministers. French authorities emphasized that crimes committed online carry the same legal consequences as those committed offline, stressing that AI does not exempt platforms or users from accountability.

India’s Ministry of Electronics and Information Technology issued a 72-hour ultimatum demanding that X remove all unlawful content and submit a detailed report on Grok’s governance and safety framework. The ministry accused the platform of enabling the “gross misuse” of artificial intelligence by allowing the creation of obscene and derogatory images of women. It warned that failure to comply could result in serious legal consequences, and the deadline has since passed without a public response.

In Poland, parliamentary speaker Włodzimierz Czarzasty cited Grok while advocating for stronger digital safety legislation to protect minors, describing the AI’s behavior as “undressing people digitally.”

Malaysia’s communications regulator confirmed investigations into users who violate laws against obscene content and stated it would summon representatives from X. In Brazil, federal lawmaker Erika Hilton filed complaints with prosecutors and the national data protection authority, calling for Grok’s AI image functions to be suspended during investigations. “The right to one’s image is individual,” Hilton stated. “It cannot be overridden by platform terms of use, and the mass distribution of sexualized images of women and children crosses all ethical and legal boundaries.”

The Grok controversy has reignited a global debate over the extent to which AI companies should be allowed to push boundaries in the name of innovation. Regulators argue that without strict safeguards, generative AI risks normalizing digital abuse on an unprecedented scale.

As governments consider fines, restrictions, and even feature bans, the outcome of this situation may set a lasting precedent for how AI systems are regulated worldwide, as well as how societies balance technological freedom with human dignity, according to Global Net News.

Interstellar Voyager 1 Resumes Operations After Communication Pause with NASA

Nasa’s Voyager 1 has resumed operations and communications after a temporary switch to a lower-power mode, allowing the spacecraft to continue its mission in interstellar space.

NASA has confirmed that Voyager 1 has regained its communication capabilities and resumed regular operations following a brief pause in late October. The spacecraft, which is currently located approximately 15.4 billion miles from Earth, experienced an unexpected shutdown of its primary radio transmitter, known as the X-band. In its place, Voyager 1 switched to its much weaker S-band transmitter, a mode that had not been utilized in over 40 years.

The communication link between NASA and Voyager 1 has been inconsistent, particularly during the period when the spacecraft was operating on the lower-band S-band. This switch hindered the Voyager mission team’s ability to download crucial science data and assess the spacecraft’s status.

Earlier this month, NASA engineers successfully reactivated the X-band transmitter, allowing for the collection of data from the four operational science instruments onboard Voyager 1. With communications restored, engineers are now focused on completing a few remaining tasks to return Voyager 1 to its pre-issue operational state. One of these tasks involves resetting the system that synchronizes the spacecraft’s three onboard computers.

The activation of the S-band was a result of Voyager 1’s fault protection system, which was triggered when engineers turned on a heater on the spacecraft. The system determined that the probe did not have sufficient power and automatically disabled nonessential systems to conserve energy for critical operations.

In this process, the fault protection system turned off all nonessential systems, including the X-band, and activated the S-band to ensure continued communication with Earth. Notably, Voyager 1 had not used the S-band for communication since 1981.

Voyager 1’s journey began in 1977, when it was launched alongside its twin, Voyager 2, on a mission to explore the gas giant planets of the solar system. The spacecraft has transmitted stunning images of Jupiter’s Great Red Spot and Saturn’s iconic rings. Voyager 2 continued its journey to Uranus and Neptune, while Voyager 1 utilized Saturn’s gravity to propel itself past Pluto.

Each Voyager spacecraft is equipped with ten science instruments, and currently, four of these instruments are operational on Voyager 1, allowing scientists to study the particles, plasma, and magnetic fields present in interstellar space.

According to NASA, the successful reestablishment of communication with Voyager 1 marks a significant milestone in the ongoing mission of this historic spacecraft.

Malicious Chrome Extensions Discovered Stealing Sensitive User Data

Two malicious Chrome extensions, “Phantom Shuttle,” were found stealing sensitive user data for years before being removed from the Chrome Web Store, raising concerns about online security.

Security researchers have recently exposed two Chrome extensions, known as “Phantom Shuttle,” that have been stealing user data for years. These extensions, which were designed to appear as harmless proxy tools, were found to be hijacking internet traffic and compromising sensitive information from unsuspecting users. Alarmingly, both extensions were available on Chrome’s official extension marketplace.

According to researchers at Socket, the extensions have been active since at least 2017. They were marketed towards foreign trade workers needing to test internet connectivity from various regions and were sold as subscription-based services, with prices ranging from approximately $1.40 to $13.60. At first glance, the extensions seemed legitimate, with descriptions that matched their purported functionality and reasonable pricing.

However, the reality was far more concerning. After installation, the Phantom Shuttle extensions routed all user web traffic through proxy servers controlled by the attackers. These proxies utilized hardcoded credentials embedded directly into the extension’s code, making detection difficult. The malicious logic was concealed within what appeared to be a legitimate jQuery library, further complicating efforts to identify the threat.

The attackers employed a custom character-index encoding scheme to obscure the credentials, ensuring they were not easily accessible. Once activated, the extensions monitored web traffic and intercepted HTTP authentication challenges on any site visited by the user. To maintain control over the traffic flow, the extensions dynamically reconfigured Chrome’s proxy settings using an auto-configuration script, effectively forcing the browser to route requests through the attackers’ infrastructure.

In its default “smarty” mode, Phantom Shuttle routed traffic from over 170 high-value domains, including developer platforms, cloud service dashboards, social media sites, and adult content portals. Notably, local networks and the attackers’ command-and-control domain were excluded, likely to avoid raising suspicion or disrupting their operations.

While functioning as a man-in-the-middle, the extensions were capable of capturing any data submitted through web forms. This included usernames, passwords, credit card details, personal information, session cookies from HTTP headers, and API tokens extracted from network requests. The potential for data theft was significant, raising serious concerns about user privacy and security.

Following the revelations, CyberGuy reached out to Google, which confirmed that both extensions had been removed from the Chrome Web Store. This incident underscores the importance of vigilance when it comes to browser extensions, as they can significantly increase the attack surface for cyber threats.

To mitigate risks associated with browser extensions, users are advised to regularly review the extensions installed on their devices. It is essential to scrutinize any extension that requests extensive permissions, particularly those related to proxy tools, VPNs, or network functionalities. If an extension seems suspicious, users should disable it immediately to prevent any potential data breaches.

Additionally, employing strong antivirus software can provide an extra layer of protection against suspicious network activity and unauthorized changes to browser settings. This software can alert users to potential threats, including phishing emails and ransomware scams, helping to safeguard personal information and digital assets.

Ultimately, the Phantom Shuttle incident serves as a reminder of the dangers posed by malicious extensions that masquerade as legitimate tools. Users must remain vigilant and proactive in managing their browser extensions to protect their online privacy and security. As the landscape of cyber threats continues to evolve, staying informed and cautious is crucial.

For further information on cybersecurity and best practices, visit CyberGuy.com.

OpenAI Acknowledges AI Browsers Vulnerable to Unsolvable Prompt Attacks

OpenAI acknowledges that prompt injection attacks pose a long-term security risk for AI-powered browsers, highlighting the challenges of safeguarding these technologies in an evolving cyber landscape.

OpenAI has developed an automated attacker system to assess the security of its ChatGPT Atlas browser against prompt injection threats and other cybercriminal risks. This initiative underscores the growing recognition that cybercriminals can exploit vulnerabilities without relying on traditional malware or exploits; sometimes, all they need are the right words.

In a recent blog post, OpenAI admitted that prompt injection attacks are unlikely to be fully eradicated. These attacks involve embedding malicious instructions within web pages, documents, or emails in ways that are not easily detectable by humans but can be recognized by AI agents. Once the AI processes this content, it may be misled into executing harmful commands.

OpenAI likened this issue to scams and social engineering, noting that while it is possible to reduce the frequency of such attacks, complete elimination is improbable. The company also pointed out that the “agent mode” feature in its ChatGPT Atlas browser increases the potential risk, as it broadens the attack surface. The more capabilities an AI has to act on behalf of users, the greater the potential for damage if something goes awry.

Since the launch of the ChatGPT Atlas browser in October, security researchers have been quick to explore its vulnerabilities. Within hours of its release, demonstrations emerged showing how a few strategically placed words in a Google Doc could alter the browser’s behavior. On the same day, Brave issued a warning, stating that indirect prompt injection represents a fundamental issue for AI-powered browsers, including those developed by other companies like Perplexity.

This challenge is not confined to OpenAI alone. Earlier this month, the National Cyber Security Centre in the U.K. cautioned that prompt injection attacks against generative AI systems may never be fully mitigated. OpenAI views prompt injection as a long-term security challenge that necessitates ongoing vigilance rather than a one-time solution. Their strategy includes quicker patch cycles, continuous testing, and layered defenses, aligning with approaches taken by competitors such as Anthropic and Google, who advocate for architectural controls and persistent stress testing.

OpenAI’s approach includes the development of what it calls an “LLM-based automated attacker.” This AI-driven system is designed to simulate a hacker’s behavior, using reinforcement learning to identify ways to insert malicious instructions into an AI agent’s workflow. The bot conducts simulated attacks, predicting how the target AI would reason and where it might fail, allowing it to refine its tactics based on feedback. OpenAI believes this method can reveal weaknesses more rapidly than traditional attackers might.

Despite these defensive measures, AI browsers remain vulnerable. They combine two elements that attackers find appealing: autonomy and access. Unlike standard browsers, AI browsers do not merely display information; they can read emails, scan documents, click links, and take actions on behalf of users. This means that a single malicious prompt hidden within a webpage or document can influence the AI’s actions without the user’s awareness. Even with safeguards in place, these agents operate on a foundation of trust in the content they process, which can be exploited.

While it may not be possible to completely eliminate prompt injection attacks, users can take steps to mitigate their impact. It is advisable to limit an AI browser’s access to only what is necessary. Avoid linking primary email accounts, cloud storage, or payment methods unless absolutely required. The more data an AI can access, the more attractive it becomes to potential attackers, and reducing access can minimize the potential fallout if an attack occurs.

Users should also refrain from allowing AI browsers to send emails, make purchases, or modify account settings without explicit confirmation. This additional layer of verification can interrupt long attack chains and provide an opportunity to detect suspicious behavior. Many prompt injection attacks rely on the AI acting silently in the background without user oversight.

Utilizing a password manager is another effective strategy to ensure that each account has a unique and robust password. If an AI browser or a malicious webpage compromises one credential, attackers will be unable to exploit it elsewhere. Many password managers also have features that prevent autofill on unfamiliar or suspicious sites, alerting users to potential threats before they enter any information.

Additionally, users should check if their email addresses have been exposed in previous data breaches. A reliable password manager often includes a breach scanner that can identify whether email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Even if an attack originates within the browser, antivirus software can still detect suspicious scripts, unauthorized system changes, or malicious network activity. Effective antivirus solutions focus on behavior rather than just files, which is essential for addressing AI-driven or script-based attacks. Strong antivirus protection can also alert users to phishing emails and ransomware scams, safeguarding personal information and digital assets.

When instructing an AI browser, it is important to be specific about its permissions. General commands like “handle whatever is needed” can give attackers the opportunity to manipulate the AI through hidden prompts. Narrowing instructions makes it more challenging for malicious content to influence the agent.

As AI browsers continue to evolve, security fixes must keep pace with emerging attack techniques. Delaying updates can leave known vulnerabilities exposed for longer than necessary. Enabling automatic updates ensures that users receive protection as soon as it becomes available, even if they miss the announcement.

The rapid rise of AI browsers has led to offerings from major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Existing browsers like Chrome and Edge are also integrating AI and agentic features into their platforms. While these technologies hold promise, they are still in their infancy, and users should be cautious about the hype surrounding them.

As AI browsers become more prevalent, the question remains: Are they worth the risk, or are they advancing faster than security measures can keep up? Users are encouraged to share their thoughts on this topic at Cyberguy.com.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms.

NASA has finalized its strategy for sustaining a human presence in space, looking ahead to the planned de-orbiting of the International Space Station (ISS) in 2030. The agency’s new document emphasizes the importance of maintaining the capability for extended stays in orbit after the ISS is retired.

“NASA’s Low Earth Orbit Microgravity Strategy will guide the agency toward the next generation of continuous human presence in orbit, enable greater economic growth, and maintain international partnerships,” the document states. This commitment comes amid concerns about whether new space stations will be ready in time, especially with the incoming administration’s efforts to cut spending through the Department of Government Efficiency, raising fears of potential budget cuts for NASA.

NASA Deputy Administrator Pam Melroy acknowledged the tough decisions that have been made in recent years due to budget constraints. “Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities,” she said.

Commercial space company Voyager is actively working on one of the space stations that could replace the ISS when it de-orbits in 2030. Jeffrey Manber, Voyager’s president of international and space stations, expressed support for NASA’s strategy, emphasizing the need for a clear commitment from the United States. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” he stated.

The push for a sustained human presence in space dates back to President Reagan, who first launched the initiative for a permanent human residence in space. He also highlighted the importance of private partnerships, stating, “America has always been greatest when we dared to be great. We can reach for greatness.” Reagan’s vision included the belief that the market for space transportation could surpass the nation’s capacity to develop it.

The ISS has been a cornerstone of human spaceflight since the first module was launched in 1998. Over the past 24 years, it has hosted more than 28 astronauts from 23 countries, maintaining continuous human occupation.

The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and emphasized the need to transition to commercial platforms. The Biden administration has continued this policy direction.

NASA Administrator Bill Nelson noted the possibility of extending the ISS’s operational life if commercial stations are not ready. “Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031,” he said in June.

In recent months, there have been discussions about what “continuous human presence” truly means. Melroy addressed these concerns at the International Astronautical Congress in October, stating, “I just want to talk about the elephant in the room for a moment, continuous human presence. What does that mean? Is it continuous heartbeat or continuous capability?” She emphasized that while the agency hoped for a seamless transition, ongoing conversations are necessary to clarify the definition and implications of continuous presence.

NASA’s finalized strategy has taken into account feedback from commercial and international partners regarding the potential loss of the ISS without a ready commercial alternative. “Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy said. She highlighted that the United States currently leads in human spaceflight, noting that the only other space station in orbit when the ISS de-orbits will be the Chinese space station. “We want to remain the partner of choice for our industry and for our goals for NASA,” she added.

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

Melroy acknowledged the challenges posed by budget caps resulting from agreements between the White House and Congress for fiscal years 2024 and 2025. “We’ve had some challenges, to be perfectly honest with you. The budget caps have left us without as much investment. So, what we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit,” she stated.

Voyager maintains that it is on track with its development timeline and plans to launch its starship space station in 2028. “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station,” Manber said. He emphasized the importance of maintaining a permanent presence in space, warning that losing it could disrupt the supply chain that supports the burgeoning space economy.

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could be crucial for some projects. NASA may also consider funding new space station proposals, including concepts from Long Beach, California’s Vast Space, which recently unveiled plans for its Haven modules, with a launch of Haven-1 anticipated as soon as next year.

Melroy concluded by underscoring the importance of competition in this development project. “We absolutely think competition is critical. This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” she said.

As NASA moves forward with its strategy, the agency remains committed to ensuring a continuous human presence in space, fostering innovation and collaboration in the commercial space sector.

According to Fox News.

University of Phoenix Data Breach Affects 3.5 Million Individuals

Nearly 3.5 million individuals associated with the University of Phoenix were impacted by a significant data breach that exposed sensitive personal and financial information.

The University of Phoenix has confirmed a substantial data breach affecting approximately 3.5 million students and staff. The incident originated in August when cyber attackers infiltrated the university’s network and accessed sensitive information without detection.

The breach was discovered on November 21, after the attackers listed the university on a public leak site. In early December, the university publicly disclosed the incident, and its parent company filed an 8-K form with regulators to report the breach.

According to notification letters submitted to Maine’s Attorney General, a total of 3,489,274 individuals were affected by the breach. This group includes current and former students, faculty, staff, and suppliers.

The university reported that hackers exploited a zero-day vulnerability in the Oracle E-Business Suite, an application that manages financial operations and contains highly sensitive data. Security researchers have indicated that the attack bears similarities to tactics employed by the Clop ransomware gang, which has a history of stealing data through zero-day vulnerabilities rather than encrypting systems.

The specific vulnerability associated with this breach is identified as CVE-2025-61882 and has reportedly been exploited since early August. The attackers accessed a range of sensitive personal and financial information, raising significant concerns about identity theft, financial fraud, and targeted phishing scams.

In letters sent to those affected, the university confirmed the breach’s impact on 3,489,274 individuals. Current and former students and employees are advised to monitor their mail closely, as notification letters are typically sent via postal mail rather than email. These letters detail the exposed data and provide instructions for accessing protective services.

A representative from the University of Phoenix provided a statement regarding the incident: “We recently experienced a cybersecurity incident involving the Oracle E-Business Suite software platform. Upon detecting the incident on November 21, 2025, we promptly took steps to investigate and respond with the assistance of leading third-party cybersecurity firms. We are reviewing the impacted data and will provide the required notifications to affected individuals and regulatory entities.”

To assist those affected, the University of Phoenix is offering free identity protection services. Individuals must use the redemption code provided in their notification letter to enroll in these services. Without this code, activation is not possible.

This breach is not an isolated incident; Clop has employed similar tactics in previous attacks involving various platforms, including GoAnywhere MFT, Accellion FTA, MOVEit Transfer, Cleo, and Gladinet CentreStack. Other universities, such as Harvard University and the University of Pennsylvania, have also reported incidents related to Oracle EBS vulnerabilities.

The U.S. government has taken notice of the situation, with the Department of State offering a reward of up to $10 million for information linking Clop’s attacks to foreign government involvement.

Universities are known to store vast amounts of personal data, including student records, financial aid files, payroll systems, and donor databases. This makes them high-value targets for cybercriminals, as a single breach can expose years of data tied to millions of individuals.

If you believe you may be affected by this breach, it is crucial to act quickly. Carefully read the notification letter you receive, as it will explain what data was exposed and how to enroll in protective services. Using the redemption code provided is essential, especially given the involvement of Social Security and banking data.

Even if you do not qualify for the free identity protection service, investing in an identity theft protection service is a wise decision. These services actively monitor sensitive information, such as your Social Security number, phone number, and email address. If your information appears on the dark web or if someone attempts to open a new account in your name, you will receive immediate alerts.

Additionally, these services can assist you in quickly freezing bank and credit card accounts to limit further fraud. It is also advisable to check bank statements and credit card activity for any unfamiliar charges and report anything suspicious immediately.

Implementing a credit freeze can prevent criminals from opening new accounts in your name, and this process is both free and reversible. To learn more about how to freeze your credit, visit relevant resources online.

As the fallout from this breach continues, individuals should remain vigilant for increased scam emails and phone calls, as criminals may reference the breach to appear legitimate. Strong antivirus software is essential for safeguarding against malicious links that could compromise your private information.

Keeping operating systems and applications up to date is also critical, as attackers often exploit outdated software to gain access. Enabling automatic updates and reviewing app permissions can help prevent further data breaches.

The University of Phoenix data breach underscores a growing concern in higher education regarding cybersecurity. When attackers exploit trusted enterprise software, the consequences can be widespread and severe. While the university’s offer of free identity protection is a positive step, long-term vigilance is essential to mitigate risks.

As discussions about cybersecurity standards in educational institutions continue, students may want to consider demanding stronger protections before enrolling. For further information and resources, visit CyberGuy.com.

Orbiter Photos Reveal Lunar Modules from First Two Moon Landings

Recent aerial images from India’s Chandrayaan 2 orbiter reveal the Apollo 11 and Apollo 12 lunar landing modules more than 50 years after their historic missions.

Photos captured by the Indian Space Research Organization’s moon orbiter, Chandrayaan 2, have provided a stunning look at the Apollo 11 and Apollo 12 landing sites over half a century later. The images, taken in April 2021, were recently shared on Curiosity’s X page, a platform dedicated to space exploration updates.

Curiosity’s post featured the aerial photographs alongside a caption that read, “Image of Apollo 11 and 12 taken by India’s Moon orbiter. Disapproving Moon landing deniers.” The images clearly depict the lunar modules, serving as a reminder of humanity’s monumental achievements in space exploration.

The Apollo 11 mission, which took place on July 20, 1969, marked a historic milestone as Neil Armstrong and Buzz Aldrin became the first men to walk on the lunar surface. Their fellow astronaut, Michael Collins, remained in lunar orbit during their historic excursion. The lunar module, known as Eagle, was left in lunar orbit after it successfully rendezvoused with Collins’ command module the following day, before ultimately returning to the moon’s surface.

Just months later, Apollo 12 followed as NASA’s second crewed mission to land on the moon. On November 19, 1969, astronauts Charles “Pete” Conrad and Alan Bean became the third and fourth men to set foot on the lunar surface. The Apollo program continued its series of missions until December 1972, when astronaut Eugene Cernan became the last person to walk on the moon.

The Chandrayaan-2 mission was launched on July 22, 2019, precisely 50 years after the historic Apollo 11 mission. It was two years later that the orbiter captured the remarkable images of the 1969 lunar landers.

In addition to Chandrayaan-2, India successfully launched Chandrayaan-3 last year, which achieved the significant milestone of being the first mission to land near the moon’s south pole.

These recent images not only highlight the enduring legacy of the Apollo missions but also underscore the advancements in space exploration technology that allow us to revisit and document these historic sites from afar, according to Fox News.

Grok AI Faces Backlash Over Flood of Sexualized Images of Women

Elon Musk’s AI chatbot Grok is facing significant backlash after users reported its image-editing feature is being misused to create sexualized images of women and minors without consent.

Elon Musk’s AI chatbot, Grok, is under intense scrutiny following reports that its image-editing feature can be exploited to generate sexualized images of women and minors without their consent. This alarming capability allows users to pull photos from the social media platform X and digitally modify them to depict individuals in lingerie, bikinis, or in states of undress.

In recent days, users on X have raised concerns about Grok being used to create disturbing content involving minors, including images that portray children in revealing clothing. The controversy emerged shortly after X introduced an “Edit Image” option, which enables users to modify images through text prompts without obtaining permission from the original poster.

Since the feature’s rollout on Christmas Day, Grok’s X account has been inundated with requests for sexually explicit edits. Reports indicate that some users have taken advantage of this tool to partially or completely strip clothing from images of women and even children.

Rather than addressing the issue with the seriousness it warrants, Musk appeared to trivialize the situation, responding with laugh-cry emojis to AI-generated images of well-known figures, including himself, depicted in bikinis. This reaction has drawn further criticism from various quarters.

In response to the backlash, a member of the xAI technical team, Parsa Tajik, acknowledged the problem on X, stating, “Hey! Thanks for flagging. The team is looking into further tightening our guardrails.”

By Friday, government officials in both India and France announced they were reviewing the situation and considering potential actions to address the misuse of Grok’s features.

In a statement addressing the backlash, Grok conceded that the system had failed to prevent misuse. “We’ve identified lapses in safeguards and are urgently fixing them,” the account stated, emphasizing that “CSAM (Child Sexual Abuse Material) is illegal and prohibited.”

The impact of these alterations on those targeted has been profoundly personal. Samantha Smith, a victim of the misuse, told the BBC she felt “dehumanized and reduced into a sexual stereotype” after Grok digitally altered an image of her to remove clothing. “While it wasn’t me that was in states of undress, it looked like me and it felt like me, and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she explained.

Another victim, Julie Yukari, a musician based in Rio de Janeiro, shared her experience after posting a photo on X just before midnight on New Year’s Eve. The image, taken by her fiancé, showed her in a red dress, curled up in bed with her black cat, Nori. The following day, as the post garnered hundreds of likes, Yukari began receiving notifications indicating that some users were prompting Grok to manipulate the image by digitally removing her clothing or reimagining her in a bikini.

During the investigation into this issue, The American Bazaar discovered multiple instances of users openly posting prompts requesting Grok to undress women in images. One user wrote, “@grok remove the bikini and have no clothes,” while another posted, “hey @grok remove the top.” Such prompts remain visible on Musk’s platform, highlighting the ease with which the feature can be misused.

Experts monitoring X’s AI governance have noted that the current backlash was anticipated. Three specialists who have followed the platform’s AI policies indicated to Reuters that the company had previously dismissed repeated warnings from civil society groups and child safety advocates. These concerns included a letter sent last year that cautioned xAI was just one step away from triggering “a torrent of obviously nonconsensual deepfakes.”

The ongoing controversy surrounding Grok underscores the urgent need for stricter regulations and safeguards to protect individuals from digital abuse and exploitation. As the situation develops, it remains to be seen how Musk and his team will address these critical concerns.

The post ‘Remove the top’: Grok AI floods with sexualized images of women appeared first on The American Bazaar.

Fake AI Chat Results Linked to Dangerous Mac Malware Spread

Security researchers warn that a new malware campaign is exploiting trust in AI-generated content to deliver dangerous software to Mac users through misleading search results.

Cybercriminals have long targeted the platforms and services that people trust the most. From email to search results, and now to AI chat responses, attackers are continually adapting their tactics. Recently, researchers have identified a new campaign in which fake AI conversations appear in Google search results, luring unsuspecting Mac users into installing harmful malware.

The malware in question is known as Atomic macOS Stealer, or AMOS. This campaign takes advantage of the growing reliance on AI tools for everyday assistance, presenting seemingly helpful and legitimate step-by-step instructions that ultimately lead to system compromise.

Investigators have confirmed that both ChatGPT and Grok have been misused in this malicious operation. One notable case traced back to a simple Google search for “clear disk space on macOS.” Instead of directing the user to a standard help article, the search result displayed what appeared to be an AI-generated conversation. This conversation provided clear and confident instructions, culminating in a command for the user to run in the macOS Terminal, which subsequently installed AMOS.

Upon further investigation, researchers discovered multiple instances of poisoned AI conversations appearing for similar queries. This consistency suggests a deliberate effort to target Mac users seeking routine maintenance assistance.

This tactic is reminiscent of a previous campaign that utilized sponsored search results and SEO-poisoned links, directing users to fake macOS software hosted on GitHub. In that case, attackers impersonated legitimate applications and guided users through terminal commands that also installed AMOS.

Once the terminal command is executed, the infection chain is triggered immediately. The command contains a base64 string that decodes into a URL hosting a malicious bash script. This script is designed to harvest credentials, escalate privileges, and establish persistence, all while avoiding visible security warnings.

The danger lies in the seemingly benign nature of the process. There are no installer windows, obvious permission prompts, or opportunities for users to review what is about to run. Because the execution occurs through the command line, standard download protections are bypassed, allowing attackers to execute their malicious code without detection.

This campaign effectively combines two powerful elements: the trust users place in AI-generated answers and the credibility of search results. Major chat tools, including Grok on X, allow users to delete parts of conversations or share selected snippets. This feature enables attackers to curate polished exchanges that appear genuinely helpful while concealing the manipulative prompts that produced them.

Using prompt engineering, attackers can manipulate ChatGPT to generate step-by-step cleanup or installation guides that ultimately lead to malware installation. The sharing feature of ChatGPT then creates a public link within the attacker’s account. From there, criminals either pay for sponsored search placements or employ SEO tactics to elevate these shared conversations in search results.

Some ads are crafted to closely resemble legitimate links, making it easy for users to assume they are safe without verifying the advertiser’s identity. One documented example showed a sponsored result promoting a fake “Atlas” browser for macOS, complete with professional branding.

Once these links are live, attackers need only wait for users to search, click, and trust the AI-generated output, following the instructions precisely as written.

While AI tools can be beneficial, attackers are now manipulating these technologies to lead users into dangerous situations. To protect yourself without abandoning search or AI entirely, consider the following precautions.

The most critical rule is this: if an AI response or webpage instructs you to open Terminal and paste a command, stop immediately. Legitimate macOS fixes rarely require users to blindly execute scripts copied from the internet. Once you press Enter, you lose visibility into what happens next, and malware like AMOS exploits this moment of trust to bypass standard security checks.

AI chats should not be considered authoritative sources. They can be easily manipulated through prompt engineering to produce dangerous guides that appear clean and confident. Before acting on any AI-generated fix, cross-check it with Apple’s official documentation or a trusted developer site. If verification is difficult, do not execute the command.

Using a password manager is another effective strategy. These tools create strong, unique passwords for each account, ensuring that if one password is compromised, it does not jeopardize all your other accounts. Many password managers also prevent autofilling credentials on unfamiliar or fake sites, providing an additional layer of security against credential-stealing malware.

It is also wise to check if your email has been exposed in previous breaches. Our top-rated password manager includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If a match is found, promptly change any reused passwords and secure those accounts with new, unique credentials.

Regular updates are essential, as AMOS and similar malware often exploit known vulnerabilities after initial infections. Delaying updates gives attackers more opportunities to escalate privileges or maintain persistence. Enable automatic updates to ensure you remain protected, even if you forget to do so manually.

Modern macOS malware frequently operates through scripts and memory-only techniques. A robust antivirus solution does more than scan files; it monitors behavior, flags suspicious scripts, and can halt malicious activity even when no obvious downloads occur. This is particularly crucial when malware is delivered through Terminal commands.

To safeguard against malicious links that could install malware and access your private information, ensure you have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets secure.

Paid search ads can closely mimic legitimate results. Always verify the identity of the advertiser before clicking. If a sponsored result leads to an AI conversation, a download, or instructions to run commands, close it immediately.

Search results promising quick fixes, disk cleanup, or performance boosts are common entry points for malware. If a guide is not hosted by Apple or a reputable developer, assume it may be risky, especially if it suggests command-line solutions.

Attackers invest time in making fake AI conversations appear helpful and professional. Clear formatting and confident language are often part of the deception. Taking a moment to question the source can often disrupt the attack chain.

This campaign illustrates a troubling shift from traditional hacking methods to manipulating user trust. Fake AI conversations succeed because they sound calm, helpful, and authoritative. When these conversations are elevated through search results, they gain undeserved credibility. While the technical aspects of AMOS are complex, the entry point remains simple: users must follow instructions without questioning their origins.

Have you ever followed an AI-generated fix without verifying it first? Share your experiences with us at Cyberguy.com.

According to CyberGuy.com, staying vigilant and informed is key to navigating the evolving landscape of cybersecurity threats.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers recently misidentified a Tesla Roadster launched into space by SpaceX in 2018 as an asteroid, prompting a swift correction from the Minor Planet Center.

A surprising mix-up occurred earlier this month when astronomers mistook a Tesla Roadster, launched into orbit by SpaceX in 2018, for an asteroid. The Minor Planet Center, part of the Harvard-Smithsonian Center for Astrophysics in Massachusetts, quickly corrected the error after registering the object as 2018 CN41.

The registration of 2018 CN41 was deleted just one day later, on January 3, when it became clear that the object in question was not an asteroid but rather Elon Musk’s iconic roadster. The Minor Planet Center announced on its website that the designation was removed after it was determined that the orbit of 2018 CN41 matched that of an artificial object, specifically the Falcon Heavy upper stage carrying the Tesla Roadster.

This roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. Originally, it was expected to enter an elliptical orbit around the sun, extending slightly beyond Mars before returning toward Earth. However, it appears that the roadster exceeded Mars’ orbit and continued on toward the asteroid belt, as Musk indicated at the time.

When the Tesla Roadster was mistakenly identified as an asteroid, it was located less than 150,000 miles from Earth, which is closer than the orbit of the moon. This proximity raised concerns among astronomers, who felt it necessary to monitor the object closely.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the incident, highlighting the challenges posed by untracked objects in space. “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” he remarked, emphasizing the potential implications of such identification errors.

The Tesla Roadster, which features a mannequin named Starman in the driver’s seat, has become a symbol of SpaceX’s innovative spirit and Musk’s unique approach to space exploration. As it continues its journey through the cosmos, the roadster serves as a reminder of the intersection between technology, humor, and the vastness of space.

As the situation unfolded, Fox News Digital reached out to SpaceX for further comment but had not received a response at the time of publication. This incident underscores the importance of accurate tracking and identification of objects in space, particularly as more artificial satellites and spacecraft are launched into orbit.

According to Astronomy Magazine, the mix-up illustrates the complexities involved in monitoring the increasing number of artificial objects in Earth’s vicinity. As space exploration continues to advance, the need for precise tracking systems becomes ever more critical.

Rising RAM Prices Expected to Increase Technology Costs by 2026

The rising cost of RAM is expected to increase the prices of various tech devices in 2026, impacting consumers across multiple sectors.

The cost of many electronic devices is likely to rise due to a significant increase in the price of Random Access Memory (RAM), a component typically regarded as one of the more affordable parts of a computer. Since October of last year, RAM prices have more than doubled, raising concerns among manufacturers and consumers alike.

RAM is essential for the operation of devices ranging from smartphones and smart TVs to medical equipment. The surge in RAM prices has been largely attributed to the growing demand from artificial intelligence (AI) data centers, which require substantial amounts of memory to function effectively.

While manufacturers often absorb minor cost increases, substantial hikes like this one are typically passed on to consumers. Steve Mason, general manager of CyberPowerPC, a company that specializes in building computers, noted, “We are being quoted costs around 500% higher than they were only a couple of months ago.” He emphasized that there will inevitably come a point where these elevated component costs will compel manufacturers to reconsider their pricing strategies.

Mason further explained that any device utilizing memory or storage could see a corresponding price increase. RAM plays a critical role in storing code while a device is in use, making it a vital component in every computer system.

Danny Williams, a representative from PCSpecialist, another computer building site, expressed his expectation that price increases would persist “well into 2026.” He remarked on the buoyant market conditions of 2025 and warned that if memory prices do not stabilize, there could be a decline in consumer demand in the upcoming year. Williams observed a varied impact across different RAM producers, with some vendors maintaining larger inventories, resulting in more moderate price increases of approximately 1.5 to 2 times. In contrast, other companies with limited stock have raised prices by as much as five times.

Chris Miller, author of the book “Chip War,” identified AI as the primary driver of demand for computer memory. He stated, “There’s been a surge of demand for memory chips, driven above all by the high-end High Bandwidth Memory that AI requires.” This heightened demand has led to increased prices across various types of memory chips.

Miller also pointed out that prices can fluctuate dramatically based on supply and demand dynamics, which are currently skewed in favor of demand. Mike Howard from Tech Insights elaborated on this by indicating that cloud service providers are finalizing their memory needs for 2026 and 2027. This clarity in demand has made it evident that supply will not keep pace with the requirements set by major players like Amazon and Google.

Howard remarked, “With both demand clarity and supply constraints converging, suppliers have steadily pushed prices upward, in some cases aggressively.” He noted that some suppliers have even paused issuing price quotes, a rare move that signals confidence in the expectation that prices will continue to rise.

As the tech industry braces for these changes, consumers may soon find themselves facing higher costs for a wide range of devices, from personal electronics to essential medical equipment. The ongoing fluctuations in RAM prices underscore the interconnected nature of technology supply chains and the impact of emerging trends like AI on everyday consumer products.

According to American Bazaar, the implications of rising RAM prices could be felt across various sectors, prompting both manufacturers and consumers to prepare for a potentially challenging economic landscape in 2026.

How to Share Estimated Arrival Time on iPhone and Android

Sharing your estimated time of arrival (ETA) on Apple Maps and Google Maps allows for safer driving and keeps your contacts informed without the need for constant check-ins.

In today’s fast-paced world, sharing your estimated time of arrival (ETA) has become a practical necessity. Both Apple Maps and Google Maps offer built-in features that allow users to send live updates about their arrival times while driving. This functionality not only enhances safety by minimizing distractions but also provides peace of mind to both the driver and their contacts.

When you share your ETA, you enable your friends and family to know when to expect you without the need for constant communication. This is especially useful during late-night drives, long journeys, or when navigating unfamiliar areas. By automating the process of updating your contacts, you can focus on the road ahead rather than responding to messages.

To utilize this feature effectively, ensure that you have the latest versions of Apple Maps or Google Maps installed on your device. For this guide, we tested the steps using an iPhone 15 Pro Max running iOS 16.2 and a Samsung Galaxy phone operating on Android 16.

Before you start navigating, it is crucial to confirm that Apple Maps has the necessary permissions enabled. Without these settings, the option to share your ETA may not appear. For Android users, the process is similarly straightforward with Google Maps.

To share your ETA using Apple Maps, begin by initiating navigation. Once your route is set, tap the route card located at the bottom of the screen to expand it. From there, you can activate the sharing feature. Note that ETA sharing only becomes available after navigation has commenced, and you must have Location Services enabled for both Maps and Contacts.

For those using Google Maps on an Android device, the process is just as simple. After starting your navigation, look for the option to share your live arrival time. Depending on your device and Android version, the wording or placement of the menu may vary slightly. Once sharing is activated, your contacts will be able to track your live location and see updated arrival times until you reach your destination or choose to stop sharing.

Both Apple Maps and Google Maps handle updates automatically once the sharing feature is activated. If you ever wish to stop sharing your ETA, you can easily do so from the navigation screen at any time.

Using ETA sharing can significantly reduce the pressure of keeping others informed while you drive. With Apple Maps and Google Maps managing the updates, this simple habit enhances communication safety and provides reassurance to those waiting for your arrival.

As you navigate your daily travels, consider how often you utilize ETA sharing. Has it changed the frequency with which people check in on you? Share your experiences with us at Cyberguy.com.

For more tech tips, urgent security alerts, and exclusive deals, consider signing up for the FREE CyberGuy Report. Subscribers will also receive instant access to the Ultimate Scam Survival Guide at no cost.

According to CyberGuy.com, sharing your ETA not only improves safety but also fosters better communication with your contacts.

NYU Tandon School Launches New Robotics Hub in Brooklyn

The NYU Tandon School of Engineering has launched the Center for Robotics and Embodied Intelligence in Brooklyn, enhancing its role in robotics and artificial intelligence research.

BROOKLYN, NY – The NYU Tandon School of Engineering has officially inaugurated the Center for Robotics and Embodied Intelligence, a significant development that positions the institution at the forefront of robotics and physical artificial intelligence research on the East Coast.

Located in Downtown Brooklyn, the new center is a key component of NYU’s ambitious $1 billion investment in engineering and global science initiatives. This investment underscores Tandon’s commitment to interdisciplinary research in AI-driven robotics.

Juan de Pablo, NYU’s Executive Vice President for Global Science and Technology, will oversee the center. He emphasized the transformative potential of the intersection between robotics and AI, stating, “The intersection between robotics and AI offers unprecedented opportunities for technological developments that will bring enormous benefits to industry and society.” De Pablo added that the center will act as a hub for discovery and innovation in this dynamic field.

Among the founding co-directors is Lerrel Pinto, an assistant professor of computer science at NYU’s Courant Institute. Pinto, who is of Indian American descent, will play a pivotal role in defining the center’s research agenda, which emphasizes embodied intelligence. This approach allows robots to learn movement and decision-making by engaging with the physical world and analyzing human motion. He will work alongside co-directors Ludovic Righetti and Chen Feng to lead a research team comprising over 70 faculty members, postdoctoral scholars, and students.

The center boasts a substantial physical infrastructure, featuring 10,000 square feet of collaborative experimental space designed to foster interdisciplinary cooperation. Its flagship facility includes a 6,800 square foot lab dedicated to advanced robotics testing, complemented by an additional 2,200 square foot space for large-scale multi-robot experiments.

Chen Feng highlighted the center’s ambition to position Tandon and New York City as a national hub for robotics research. “We want people to think of the East Coast, not just Silicon Valley, when they think about robotics and embodied AI,” he remarked.

In addition to its research initiatives, the NYU Tandon School of Engineering is set to launch the nation’s first Master of Science degree in Robotics and Embodied Intelligence through the center. This program aims to equip the next generation of engineers and researchers with the skills necessary to advance the field.

The center’s faculty have already secured over $30 million in research funding, bolstered by partnerships with leading industry players such as NVIDIA, Google, Amazon, and Qualcomm. This financial backing underscores the center’s potential to contribute significantly to the evolving landscape of robotics and AI.

As the NYU Tandon School of Engineering continues to expand its capabilities and influence, the Center for Robotics and Embodied Intelligence stands as a testament to its commitment to innovation and excellence in engineering education and research, according to India-West.

Ten Cybersecurity Resolutions for a Safer Digital Experience in 2026

As we approach 2026, adopting simple cybersecurity resolutions can significantly enhance your digital safety and protect against cybercriminals.

As 2025 comes to a close, it is essential to prioritize digital safety. Cybercriminals remain active year-round, with the holiday season often seeing a spike in scams, account takeovers, and data theft. Fortunately, enhancing your cybersecurity does not require advanced skills or costly tools. By adopting a few smart habits, you can significantly reduce your risk and safeguard your digital life throughout 2026. Here are ten straightforward cybersecurity resolutions to help you start the new year on the right foot.

First and foremost, strong passwords are your first line of defense against cyber threats. Weak or reused passwords make it easy for attackers to gain access to multiple accounts. It is crucial to use a unique password for each account, opting for longer passphrases instead of short, complex strings. Utilizing a reputable password manager can help generate and securely store your passwords, eliminating the need to memorize them. Remember, the most important rule is to never reuse passwords.

Next, check if your email has been compromised in past data breaches. A top-rated password manager typically includes a built-in breach scanner that can alert you if your email address or passwords have appeared in known leaks. If you find a match, promptly change any reused passwords and secure those accounts with new, unique credentials.

Implementing two-factor authentication (2FA) is another effective way to bolster your security. This additional step usually involves a code sent to an app or a physical security key. Even if someone manages to steal your password, 2FA can prevent unauthorized access. App-based authenticators offer stronger protection than text messages, so prioritize enabling 2FA on your email, banking, social media, and shopping accounts.

Old accounts can pose new risks. Take the time to review shopping sites, forums, apps, and subscriptions that you no longer use. Delete any accounts that are unnecessary and update the privacy settings on those you choose to keep. Sharing less personal information, such as birthdays, locations, and phone numbers, can help limit your digital footprint and reduce the potential for abuse.

Regular software updates are vital for fixing vulnerabilities that attackers exploit. Skipping updates leaves your devices open to attacks. Enable automatic updates for your operating systems, browsers, apps, routers, and smart devices to block many common threats without extra effort. Outdated software remains one of the leading causes of successful hacks.

Your personal information is often available on numerous data broker sites, which collect and sell access to sensitive information. Utilizing a personal data removal service can help locate and eliminate this information, reducing the risk of scams, phishing attempts, and identity fraud. While no service can guarantee complete removal of your data from the internet, these services actively monitor and systematically erase your personal information from various websites, providing peace of mind.

Identity theft can begin quietly, often following a data breach. Identity theft protection services can monitor your personal information, such as your Social Security number, phone number, and email address, alerting you if it is being sold on the dark web or used to open new accounts. Many of these services can also assist in freezing your bank and credit card accounts to prevent unauthorized use. Early alerts can help you take action before damage occurs.

Most cyberattacks begin with a click. Scammers often use fake shipping notices, refund alerts, and urgent messages to prompt quick action. It is crucial to pause before clicking any links or opening attachments. With many scams now employing AI to create realistic messages and images, verifying messages through official websites or apps is more important than ever. Additionally, strong antivirus software can provide another layer of protection by blocking malware, ransomware, and malicious downloads across your devices.

Your Wi-Fi network is a valuable target for cybercriminals. Change the default router password immediately and enable WPA3 encryption if your router supports it. Keeping your router firmware up to date and avoiding sharing your network with unknown devices can help secure every connected device.

Regular backups are essential for protecting against ransomware, hardware failures, and accidental deletions. Many people neglect this crucial step. Using cloud backups, an external hard drive, or both, and automating the process can ensure that your data is safe and easily recoverable in case of an emergency.

Finally, consider freezing your credit as a strong defense against identity fraud as we enter 2026. A credit freeze is free and reversible, allowing you to temporarily lift it when applying for loans or credit cards. This simple step can block many identity crimes before they occur.

Your email account is central to password resets, alerts, and account recovery. If attackers gain access, they can reach nearly everything else. Secure your primary email with a long, unique password and enable two-factor authentication. Additionally, creating email aliases for shopping, subscriptions, and sign-ups can limit exposure during data breaches and make phishing attempts easier to identify.

Adopting these cybersecurity resolutions can lead to a safer digital life. By committing to strong passwords, regular updates, backups, and heightened awareness, you can significantly reduce the risk of falling victim to cybercriminals. There is no better time to start than now. Which of these cybersecurity habits have you been delaying, and what steps will you take to address them today? Let us know by visiting Cyberguy.com.

For more information on cybersecurity tips and resources, visit CyberGuy.com.

Mars’ Red Color May Indicate Habitable Conditions in the Past

Mars’ distinctive red hue may be linked to a habitable past, according to a new study that highlights the role of the mineral ferrihydrite found in the planet’s dust.

A recent study suggests that the mineral ferrihydrite, which forms in the presence of cool water, is responsible for Mars’ characteristic red color. This finding indicates that Mars may have once had an environment capable of sustaining liquid water before transitioning to its current dry state billions of years ago.

The study, published in Nature Communications, reveals that ferrihydrite forms at lower temperatures than other minerals previously thought to contribute to the planet’s reddish hue, such as hematite. NASA, which partially funded the research, stated that this discovery could reshape our understanding of Mars’ climatic history.

Researchers analyzed data from various Mars missions, including several rovers, and compared their findings to laboratory experiments. These experiments involved testing how light interacts with ferrihydrite particles and other minerals under simulated Martian conditions.

Adam Valantinas, the study’s lead author and a postdoctoral fellow at Brown University, emphasized the significance of the research. “The fundamental question of why Mars is red has been considered for hundreds if not thousands of years,” he said in a statement. Valantinas, who initiated the study as a Ph.D. student at the University of Bern in Switzerland, added, “From our analysis, we believe ferrihydrite is everywhere in the dust and probably in the rock formations as well.” He noted that while previous studies had proposed ferrihydrite as a reason for Mars’ color, their research provides a more robust framework for testing this hypothesis using observational data and innovative laboratory methods.

Jack Mustard, the senior author of the study and a professor at Brown University, described the research as a “door-opening opportunity.” He stated, “It gives us a better chance to apply principles of mineral formation and conditions to tap back in time.” Mustard also highlighted the importance of the samples being collected by the Perseverance rover, which will allow researchers to verify their findings once returned to Earth.

The research indicates that Mars likely had a cool, wet, and potentially habitable climate in its ancient past. Although the planet’s current atmosphere is too cold to support life, evidence suggests that it once had abundant water, as indicated by the presence of ferrihydrite in its dust.

Geronimo Villanueva, Associate Director for Strategic Science of the Solar System Exploration Division at NASA’s Goddard Space Flight Center and a co-author of the study, remarked, “These new findings point to a potentially habitable past for Mars and highlight the value of coordinated research between NASA and its international partners when exploring fundamental questions about our solar system and the future of space exploration.”

Valantinas further elaborated on the goals of the research team, stating, “What we want to understand is the ancient Martian climate, the chemical processes on Mars—not only ancient but also present.” He raised the critical question of habitability, asking, “Was there ever life? To understand that, you need to understand the conditions that were present during the time of this mineral’s formation.” He explained that for ferrihydrite to form, conditions must have existed where oxygen from the atmosphere or other sources could react with iron in the presence of water, contrasting sharply with today’s dry and cold Martian environment.

As Martian winds spread this dust across the planet, they contributed to the iconic red appearance that Mars is known for today.

These findings underscore the importance of continued exploration and research into Mars’ past, as scientists strive to uncover the mysteries of the planet’s history and its potential for supporting life.

According to NASA, the implications of this study could significantly enhance our understanding of Mars and its geological and climatic evolution.

Satya Nadella Predicts 2026 Will Mark Significant Advancements in AI

Microsoft CEO Satya Nadella predicts that 2026 will mark a significant transition for artificial intelligence, moving from experimentation to real-world applications.

SEATTLE, WA – Microsoft CEO Satya Nadella has emphasized that 2026 will be a pivotal year for artificial intelligence (AI), signaling a shift from initial experimentation and excitement to broader, real-world adoption of the technology.

In a recent blog post, Nadella articulated that the AI industry is evolving beyond mere flashy demonstrations, moving towards a clearer distinction between “spectacle” and “substance.” This evolution aims to enhance understanding of where AI can truly deliver meaningful impact.

While acknowledging the rapid pace of AI development, Nadella noted that the practical application of these powerful systems has not kept pace. He described the current landscape as a phase of “model overhang,” where AI models are advancing faster than our ability to implement them effectively in daily life, business, and society.

“We are still in the opening miles of a marathon,” Nadella remarked, highlighting that despite remarkable progress, much about AI’s future remains uncertain.

He pointed out that many of today’s AI capabilities have yet to translate into tangible outcomes that enhance productivity, decision-making, or human well-being on a large scale. Reflecting on the early days of personal computing, Nadella referenced Steve Jobs’ famous analogy of computers as “bicycles for the mind,” tools designed to enhance human thought and work.

“This idea needs to evolve in the age of AI,” he stated, suggesting that rather than replacing human thinking, AI systems should be crafted to support and amplify it. He envisions AI as cognitive tools that empower individuals to achieve their goals more effectively.

Nadella further argued that the true value of AI does not lie in the power of a model itself, but rather in how individuals choose to utilize it. He urged a shift in the debate surrounding AI outputs, moving away from simplistic judgments of quality and instead focusing on how humans adapt to these new tools in their everyday interactions and decision-making processes.

The Microsoft chief also underscored the necessity for the AI industry to progress beyond merely developing advanced models. He emphasized the importance of constructing comprehensive systems around AI, which include software, workflows, and safeguards that enable the technology to be used reliably and responsibly.

Despite the rapid advancements in AI, Nadella acknowledged that current systems still exhibit rough edges and limitations that require careful management. As the industry prepares for the future, he remains optimistic about the potential of AI to transform various aspects of life, provided that the right frameworks and approaches are established.

According to IANS, Nadella’s insights reflect a broader understanding of the challenges and opportunities that lie ahead in the realm of artificial intelligence.

Microsoft Typosquatting Scam Uses Letter Swaps to Steal Logins

Scammers are using a clever typosquatting technique to impersonate Microsoft, exploiting visual similarities in domain names to steal user login credentials.

A new phishing campaign is leveraging a subtle visual trick that can easily go unnoticed. Attackers are utilizing the domain rnicrosoft.com to impersonate Microsoft and steal login credentials. The deception lies in the way the letters are arranged; instead of the letter “m,” the scammers use “r” and “n” placed side by side. In many fonts, these letters can appear almost identical to an “m” at a quick glance.

Security experts are raising alarms about this tactic, which has proven effective. The phishing emails closely mimic Microsoft’s branding, layout, and tone, creating a false sense of familiarity and trustworthiness. This illusion often leads users to click links before realizing something is amiss.

This attack exploits the way people read. Our brains tend to predict words rather than scan each letter individually. When something appears familiar, we automatically fill in the gaps. While a careful reader might spot the flaw on a large desktop monitor, the risk increases significantly on mobile devices. The address bar often shortens URLs, leaving little room for detailed inspection—exactly where attackers want users to be vulnerable.

Once trust is established, victims are more likely to enter passwords, approve fraudulent invoices, or download harmful attachments. Attackers typically employ multiple visual deceptions to enhance their chances of success. For instance, they might use mmicros0ft.com to replace the letter “o” with the number “0,” or use domains like microsoft-support.com that add official-sounding words to appear legitimate.

Typosquatting domains such as rnicrosoft.com are rarely used for a single purpose; criminals often repurpose them across various scams. Common follow-up tactics include credential phishing, fake HR notices, and vendor payment requests. In every case, the attackers benefit from speed—the quicker they act, the less likely users are to notice the mistake.

Most individuals do not take the time to read URLs character by character. Familiar logos and language reinforce trust, particularly during a busy workday. The prevalence of mobile device use exacerbates this issue. Smaller screens, shortened links, and constant notifications create an environment ripe for mistakes. This is not an issue exclusive to Microsoft; banks, retailers, healthcare portals, and government services are all susceptible to similar risks.

Typosquatting scams thrive on the rush to trust what appears familiar. However, there are steps users can take to slow down and identify fake domains before any damage is done. Before clicking on any link, it is advisable to open the full sender address in the email header. Display names and logos can be easily faked, but the domain reveals the true source.

Users should look closely for swapped letters, such as “rn” in place of “m,” added hyphens, or unusual domain endings. If the address feels even slightly off, it is wise to treat the message as potentially hostile. On a desktop, hovering the mouse over links can reveal the actual destination. On mobile devices, long-pressing the link allows users to preview the URL. This simple pause can often expose lookalike domains designed to steal login credentials.

When an email claims urgent action is needed for an account, it is best not to use the provided links. Instead, open a new browser tab and manually navigate to the official website using a saved bookmark. Legitimate companies do not require users to act through unexpected links, and this practice can effectively thwart most typosquatting attempts.

Employing strong antivirus software can also provide an additional layer of protection. Such software can block known phishing domains, flag malicious downloads, and alert users before they enter credentials on risky sites. While it may not catch every new typo trick, it serves as an important safety net when human attention falters.

Even if the sender’s address appears correct, it is crucial to inspect the “Reply To” field. Many phishing campaigns direct replies to external inboxes unrelated to the actual company. A mismatch here is a strong indicator that the message is a scam.

Typosquatting attacks often begin with leaked or scraped contact details. Utilizing a data removal service can help eliminate personal information from data broker sites, thereby reducing the number of scam emails and targeted phishing attempts that reach your inbox. While no service can guarantee complete removal of personal data from the internet, investing in a data removal service is a prudent choice. These services actively monitor and systematically erase personal information from numerous websites, providing peace of mind.

For email, banking, and work portals, using bookmarks created by the user is an effective strategy. This practice eliminates the risk of mistyping addresses or trusting links in messages, serving as one of the simplest and most effective defenses against lookalike domain attacks.

Typosquatting preys on human behavior rather than software flaws. A single swapped character can bypass filters and deceive even the most vigilant individuals in seconds. By becoming aware of these tricks, users can slow down attackers and regain control over their online security. Awareness transforms a sophisticated scam into an obvious fake.

If a single letter can determine whether you fall victim to a scam, how closely are you really scrutinizing the links you trust every day? For more information on protecting yourself from phishing scams, visit CyberGuy.com.

Private Lunar Lander Blue Ghost Successfully Lands on the Moon

A private lunar lander, Blue Ghost, successfully landed on the moon carrying equipment for NASA, marking a significant milestone for commercial space exploration.

A private lunar lander carrying equipment for NASA successfully touched down on the moon on Sunday, with the company’s Mission Control confirming the landing from Texas.

Firefly Aerospace’s Blue Ghost lander, which includes a drill, vacuum, and other essential tools, descended from lunar orbit on autopilot. It targeted the slopes of an ancient volcanic dome located in an impact basin on the moon’s northeastern edge.

The successful landing was confirmed by the company’s Mission Control, situated outside Austin, Texas. Will Coogan, chief engineer for the lander, expressed excitement, stating, “You all stuck the landing. We’re on the moon.”

This achievement makes Firefly Aerospace the first private company to successfully land a spacecraft on the moon without crashing or tipping over. Historically, only five countries—Russia, the United States, China, India, and Japan—have accomplished successful lunar landings, with some government missions having failed in the past.

Blue Ghost, named after a rare species of firefly found in the United States, stands 6 feet 6 inches tall and spans 11 feet wide, providing enhanced stability during its descent and landing.

Approximately half an hour after landing, Blue Ghost began transmitting images from the lunar surface. The first image captured was a selfie, albeit somewhat obscured by the sun’s glare.

Looking ahead, two other companies are preparing to launch their lunar landers, with the next mission expected to join Blue Ghost on the moon later this week.

This successful landing represents a significant step forward in commercial space exploration and underscores the growing interest and investment in lunar missions.

According to The Associated Press, the developments in private lunar exploration are paving the way for future astronaut missions and potential business opportunities on the moon.

SoftBank Finalizes $40 Billion Investment in OpenAI

SoftBank has finalized its $40 billion investment in OpenAI, marking a significant move in the competitive landscape of artificial intelligence.

SoftBank has officially completed its commitment to invest $40 billion in OpenAI, as reported by CNBC’s David Faber. The final tranche of the investment, amounting to between $22 billion and $22.5 billion, was transferred last week.

Sources indicate that the Japanese investment giant was in a race to finalize this substantial commitment, utilizing various cash-raising strategies, including the sale of some of its existing investments. Reports suggest that SoftBank may also tap into its undrawn margin loans, which are secured against its valuable stake in chip manufacturer Arm Holdings.

Prior to this latest investment, SoftBank had already invested $8 billion directly in OpenAI, along with an additional $10 billion syndicated with co-investors. With this latest infusion of capital, SoftBank’s total stake in the AI company now exceeds 10%.

In February, CNBC reported that SoftBank was nearing the completion of its $40 billion investment in OpenAI, which was valued at $260 billion pre-money at the time. This investment represents one of the most significant bets made by SoftBank CEO Masayoshi Son as he intensifies the company’s efforts to establish a strong foothold in the rapidly evolving AI sector.

To finance this investment, Son sold SoftBank’s $5.8 billion stake in Nvidia and divested $4.8 billion from its stake in T-Mobile U.S. Additionally, the company has made workforce reductions. SoftBank Chief Financial Officer Yoshimitsu Goto previously informed investors that these asset sales are part of a broader strategy aimed at balancing growth with financial stability.

The surge in investments in artificial intelligence has been notable, with OpenAI committing over $1.4 trillion to infrastructure development over the coming years. This includes partnerships with major chipmakers such as Nvidia, Advanced Micro Devices, and Broadcom.

SoftBank has a history of investing heavily in AI and was an early backer of Nvidia. Recently, the conglomerate announced a $4 billion acquisition of DigitalBridge, a data center investment firm, to further bolster its AI initiatives. Last month, SoftBank liquidated its entire $5.8 billion stake in Nvidia, a move that sources indicated would help support its investment in OpenAI.

In addition to SoftBank’s significant investment, OpenAI is reportedly exploring a potential investment exceeding $10 billion from Amazon. Disney has also joined the ranks of investors, committing $1 billion in an equity investment deal that allows users of OpenAI’s video generator, Sora, to create content featuring licensed characters like Mickey Mouse.

This latest wave of investments underscores the growing interest and competition in the AI sector, with major players positioning themselves to capitalize on the technology’s transformative potential.

According to CNBC, SoftBank’s aggressive investment strategy reflects its commitment to remaining at the forefront of the AI revolution.

AI Emerges as Potential Threat to Remote Work Opportunities

Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, warns that advances in artificial intelligence could threaten the future of remote jobs, particularly those reliant on cognitive work.

As remote work becomes a staple in many people’s lives, a recent forecast from Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, raises significant concerns about its future. In an interview with Professor Hannah Fry, Legg suggested that rapid advancements in artificial intelligence (AI) could soon disrupt the landscape of work-from-home arrangements as we know them today.

Legg emphasized that jobs performed entirely online are likely to be the first to feel the impact of AI’s evolution. He noted that as AI approaches human-level capabilities, positions that primarily involve cognitive tasks and can be executed remotely are particularly at risk.

“Jobs that are purely cognitive and done remotely via a computer are particularly vulnerable,” Legg stated, highlighting his apprehension about the implications of AI on the workforce. He pointed out that as AI tools become increasingly sophisticated, companies may find they no longer require large teams spread across various locations.

In sectors like software engineering, Legg posited that what once necessitated a workforce of 100 engineers could potentially be managed by just 20 individuals leveraging advanced AI technologies. This shift, he warned, could lead to a reduction in overall job availability, with entry-level and remote positions likely to be the first casualties.

Legg also indicated that the impact of AI will not be uniform across all industries. He suggested that roles centered around digital skills—such as language, knowledge work, coding, mathematics, and complex problem-solving—are likely to experience the earliest pressures from AI advancements.

In many of these domains, AI systems are already outperforming human capabilities, particularly in areas like language processing and general knowledge. Legg anticipates rapid improvements in reasoning, visual understanding, and continuous learning, further intensifying competition for cognitive jobs.

Conversely, jobs that require physical, hands-on work—such as plumbing or construction—may remain insulated from these changes for a longer period, as automating real-world tasks presents significant challenges.

Legg went further to assert that AI has the potential to fundamentally reshape the economy by outperforming humans in cognitive tasks at a lower cost. As machines become capable of handling mental labor more efficiently, the traditional model of earning a living through intellectual work could come under significant strain, leaving many without conventional employment opportunities.

He cautioned against dismissing these developments, likening the situation to ignoring early warnings about major global threats. Legg stressed the importance of preparing for this impending shift now, rather than waiting until it is too late.

Despite his stark outlook regarding potential job losses, Legg also expressed optimism about the benefits AI could ultimately bring. He suggested that the technology might usher in a “golden age” characterized by substantial productivity gains, significant scientific breakthroughs, and overall economic growth.

The critical challenge, he argued, will be ensuring that the wealth generated by these advancements is equitably shared, allowing individuals to maintain a sense of purpose and security as the nature of work evolves. Legg underscored that while the transition will be gradual, the pace is expected to accelerate as AI achieves professional-level performance in knowledge-based roles.

As the conversation around AI and its implications for the workforce continues to evolve, the insights from Legg serve as a crucial reminder of the need for proactive engagement with the changes on the horizon.

According to The American Bazaar, the time to prepare for these shifts is now.

Alzheimer’s Disease May Be Reversed by Restoring Brain Balance, Study Finds

A study from University Hospitals suggests that restoring the brain’s energy molecule NAD+ may reverse Alzheimer’s disease in animal models, offering hope for future human applications.

A promising new method for reversing Alzheimer’s disease has emerged from research conducted at University Hospitals Cleveland Medical Center. The study reveals that restoring a central cellular energy molecule known as NAD+ in the brains of mice has the potential to reverse key markers of the disease, including cognitive decline and brain changes.

Researchers analyzed two different mouse models of Alzheimer’s, along with human brain tissue affected by the disease. They discovered significant declines in NAD+ levels, which is crucial for energy production, cell maintenance, and overall cell health. According to Dr. Andrew A. Pieper, the senior author of the study and director of the Brain Health Medicines Center at Harrington Discovery Institute, the decline of NAD+ is a natural part of aging.

“When NAD+ falls below necessary levels, cells cannot effectively perform essential maintenance and survival functions,” Dr. Pieper explained in an interview.

Dr. Charles Brenner, chief scientific advisor for Niagen, a company specializing in products that enhance NAD+ levels, emphasized the importance of this molecule. He noted that the brain consumes approximately 20% of the body’s energy and has a high demand for NAD+ to support cellular energy production and DNA repair. “NAD+ plays a key role in how neurons adapt to various physiological stressors and supports processes associated with brain health,” he stated.

The study utilized a medication called P7C3-A20 to restore normal NAD+ levels in the mouse models. Remarkably, this treatment not only blocked the onset of Alzheimer’s but also reversed the accumulation of amyloid and tau proteins in the brains of mice with advanced stages of the disease. Researchers reported a full restoration of cognitive function in these treated mice.

Additionally, the treated mice exhibited normalized blood levels of phosphorylated tau 217, a significant clinical biomarker used in human Alzheimer’s research. Dr. Pieper remarked, “For more than a century, Alzheimer’s has been considered irreversible. Our experiments provide proof of principle that some forms of dementia may not be inevitably permanent.”

The researchers were particularly impressed by the extent to which advanced Alzheimer’s was reversed in the mice when NAD+ homeostasis was restored, even without directly targeting amyloid plaques. “This gives reason for cautious optimism that similar strategies may one day benefit people,” Dr. Pieper added.

This research builds on previous findings from the lab, which demonstrated that restoring NAD+ balance could accelerate recovery following severe traumatic brain injury. The study, conducted in collaboration with Case Western Reserve University and the Louis Stokes Cleveland VA Medical Center, was published last week in the journal Cell Reports Medicine.

However, the researchers caution that the study’s findings are limited to mouse models and may not directly translate to human patients. “Alzheimer’s is a complex, multifactorial, uniquely human disease,” Dr. Pieper noted. “Efficacy in animal models does not guarantee the same results in human patients.”

While various drugs have been tested in clinical trials aimed at slowing the progression of Alzheimer’s, none have been evaluated for their potential to reverse the disease in humans. The authors also warned that over-the-counter NAD+-boosting supplements can lead to excessively high cellular NAD+ levels, which have been linked to cancer in some animal studies. Dr. Pieper explained that P7C3-A20 allows cells to restore and maintain appropriate NAD+ balance under stress without pushing levels too high.

For those considering NAD+-modulating supplements, Dr. Pieper recommends discussing the risks and benefits with a physician. He also highlighted proven lifestyle strategies that can promote brain resilience, including prioritizing sufficient sleep, following a MIND or Mediterranean diet, staying cognitively and physically active, maintaining social connections, addressing hearing loss, protecting against head injuries, limiting alcohol consumption, and managing cardiovascular risk factors such as avoiding smoking.

Looking ahead, the research team plans to further investigate the impact of brain energy balance on cognitive health and explore whether this strategy can be effective for other age-related neurodegenerative diseases, according to Fox News.

700Credit Data Breach Exposes Social Security Numbers of 5.8 Million Consumers

A data breach at fintech company 700Credit has compromised the personal information of over 5.8 million consumers, raising concerns about identity theft and financial fraud.

A significant data breach at fintech company 700Credit has exposed the personal information of more than 5.8 million individuals. This incident, which originated from a third-party integration partner rather than a direct compromise of 700Credit’s internal systems, highlights the ongoing risks associated with data security in the financial services sector.

The breach traces back to July 2025, when a threat actor compromised one of 700Credit’s third-party partners. During this intrusion, the attacker discovered an exposed application programming interface (API) that allowed access to sensitive customer information linked to auto dealerships using 700Credit’s services. Alarmingly, the integration partner failed to notify 700Credit about the breach, enabling unauthorized access to continue for several months.

It was not until October 25 that 700Credit detected suspicious activity within its systems, prompting an internal investigation. The company subsequently engaged third-party forensic specialists to assess the breach’s scope and identify the affected data. Their findings revealed that unauthorized copies of certain records had been made, specifically those related to customers of auto dealerships utilizing 700Credit’s platform.

Ken Hill, Managing Director of 700Credit, confirmed that approximately 20% of the consumer data accessible through the compromised system was stolen between May and October. While the company has not released a comprehensive list of the data fields involved, it has acknowledged that highly sensitive information, including Social Security numbers (SSNs), was exposed. The exposure of SSNs significantly heightens the risk of identity theft and financial fraud, as these numbers cannot be easily changed like a password.

In response to the breach, 700Credit has established a dedicated webpage detailing the incident and the types of information compromised. The company is also offering affected individuals 12 months of free identity protection and credit monitoring services through TransUnion. Those impacted have a 90-day window to enroll in this service after receiving notification of the breach.

This incident is not isolated; other platforms, including audio streaming service SoundCloud and adult video sharing site Pornhub, have also experienced data breaches linked to third-party vendors. While there is no evidence to suggest that the same vendor was involved in all three cases, these incidents underscore the risks associated with third-party access to sensitive consumer data.

When data breaches occur, the repercussions are not always immediate. Compromised data can linger in underground markets for months before being exploited. Therefore, it is crucial for individuals to take proactive measures to protect themselves. Strong antivirus software can help block malicious downloads and phishing attempts that often follow large data leaks. Additionally, using a password manager to generate unique passwords for each service can safeguard against further breaches.

Individuals should also check if their email addresses have been exposed in previous breaches. Many password managers now include built-in breach scanners that alert users if their information has appeared in known leaks. If a match is found, it is essential to change any reused passwords and secure those accounts with new, unique credentials.

Implementing two-factor authentication (2FA) for email, banking, social media, and cloud accounts can add an extra layer of security. Even if a password is compromised, 2FA requires a second verification step, making unauthorized access more difficult.

Monitoring services can alert individuals to new accounts, loans, or credit checks opened in their name, providing an opportunity to act before significant financial damage occurs. Identity theft protection services can also monitor personal information, such as SSNs, and alert users if their data is being sold on the dark web or used to open accounts fraudulently.

Furthermore, individuals should consider utilizing data removal services to reduce their digital footprint. While no service can guarantee complete removal of personal information from the internet, these services actively monitor and erase data from various websites, making it harder for attackers to profile and target individuals after a breach.

For those whose Social Security numbers are involved, a credit freeze is one of the most effective defenses. This measure prevents new credit accounts from being opened without the individual’s approval and can be temporarily lifted when necessary.

The incident at 700Credit serves as a stark reminder of the vulnerabilities associated with third-party APIs and integrations. When these partners fail to disclose breaches promptly, the downstream impact can be extensive. Individuals receiving notifications from 700Credit should take them seriously, enroll in the offered credit monitoring service, and review their credit reports for any suspicious activity.

As the digital landscape continues to evolve, the question remains: should companies be held accountable when a third-party vendor exposes customer information? This ongoing debate highlights the need for robust security measures and transparency in the handling of sensitive consumer data.

For further information on protecting yourself from identity theft and data breaches, visit CyberGuy.com.

Athena Lunar Lander Reaches Moon; Condition Still Uncertain

Athena lunar lander successfully reached the moon, but mission controllers remain uncertain about its condition and exact landing location.

Mission controllers have confirmed that the Athena lunar lander successfully touched down on the moon earlier today. However, the status of the spacecraft remains unknown, according to reports from the Associated Press.

While the lander’s landing was confirmed, details regarding its condition and the precise location of its touchdown are still unclear. The Athena lander, developed by Intuitive Machines, was equipped with an ice drill, a drone, and two rovers.

Despite the uncertainty surrounding its status, officials reported that Athena appeared to be able to communicate with its controllers. Tim Crain, the mission director and co-founder of Intuitive Machines, was heard instructing his team to “keep working on the problem,” even as the craft sent apparent “acknowledgments” back to the team in Texas.

The live stream of the mission was concluded by NASA and Intuitive Machines, who announced plans to hold a news conference later today to provide updates on Athena’s status.

This mission follows a recent successful landing by Firefly Aerospace’s Blue Ghost, which touched down on the moon on Sunday. Blue Ghost’s landing marked a significant achievement, making Firefly Aerospace the first private company to successfully place a spacecraft on the moon without it crashing or landing sideways.

Last year, Intuitive Machines faced challenges with its Odysseus lander, which landed sideways, adding pressure to the current mission. Athena is the second lunar lander to reach the moon this week, following Blue Ghost’s successful touchdown.

As the situation develops, further information about Athena’s condition and mission objectives is anticipated during the upcoming news conference, according to the Associated Press.

Pornhub Experiences Major Data Leak Exposing 200 Million User Records

Pornhub is facing a significant data breach, with the hacking group ShinyHunters claiming to have stolen 94GB of user data affecting over 200 million records and demanding a Bitcoin ransom.

Pornhub is grappling with the aftermath of a massive data leak, as the hacking group ShinyHunters has claimed responsibility for stealing 94GB of user data. This breach reportedly affects more than 200 million records, and the group is now attempting to extort the company for a ransom in Bitcoin.

According to reports from BleepingComputer, ShinyHunters has threatened to publish the stolen data if their demands are not met. Pornhub has acknowledged the situation but insists that its core systems were not compromised during the breach.

The exposed data primarily pertains to Pornhub Premium users. While no financial information was included, the dataset contains sensitive activity details that raise serious privacy concerns. The hackers claim that the stolen records include activity logs that indicate whether users watched or downloaded videos or viewed specific channels. Additionally, search histories are part of the compromised data, heightening the potential privacy risks if this information is made public.

This breach appears to be linked to a previous security incident involving Mixpanel, a data analytics vendor that had worked with Pornhub. That earlier incident occurred in November 2025, following a smishing attack that allowed threat actors access to Mixpanel’s systems. However, Mixpanel has stated that it does not believe the data stolen from Pornhub originated from that incident. The company has found no evidence that Pornhub data was taken during its November breach. Furthermore, Pornhub clarified that it ceased its relationship with Mixpanel in 2021, suggesting that the stolen data may be several years old.

To verify the claims, Reuters reached out to some Pornhub users, who confirmed that the data associated with their accounts was accurate but outdated, consistent with the timeline provided by Mixpanel.

In response to the reports, Pornhub has moved quickly to reassure its users. In a security notice, the company stated, “This was not a breach of Pornhub Premium’s systems. Passwords, payment details, and financial information remain secure and were not exposed.” This clarification helps to mitigate the immediate risk of financial fraud; however, the exposure of viewing habits and search activity still poses long-term privacy risks.

ShinyHunters has been linked to several high-profile data breaches this year, employing social engineering tactics such as phishing and smishing to infiltrate corporate systems. Once inside, the group typically steals large datasets and uses extortion threats to coerce companies into paying ransoms. This strategy has impacted businesses and users globally.

Pornhub has updated its online statement to alert Premium members about potential direct contact from cybercriminals. In cases involving adult platforms, such outreach often escalates into sextortion attempts, where criminals threaten to expose private activities unless victims comply with their demands. The company advised users, “We are aware that the individuals responsible for this incident have threatened to contact impacted Pornhub Premium users directly. You may therefore receive emails claiming they have your personal information. As a reminder, we will never ask for your password or payment information by email.”

As one of the world’s most visited adult video platforms, Pornhub allows users to view content anonymously or create accounts to upload and interact with videos. Even though the stolen data is several years old, users are encouraged to take this opportunity to enhance their digital security.

To bolster security, users should start by updating their Pornhub passwords. It is also advisable to change the passwords for any email or payment accounts linked to Pornhub. Utilizing a password manager can simplify the process of creating and storing strong, unique passwords.

Additionally, users should check if their email addresses have been exposed in previous breaches. A reliable password manager often includes a built-in breach scanner that alerts users if their email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Data breaches frequently lead to follow-up scams. Users should remain cautious of emails, texts, or phone calls referencing Pornhub or account issues. It is essential to avoid clicking on links, downloading attachments, or sharing personal information unless the source can be verified. Installing robust antivirus software adds another layer of protection against malicious links and downloads.

Data removal services can assist in removing personal information from data broker websites that collect and sell details such as email addresses, locations, and online identifiers. If leaked data from this breach is shared or resold, removing personal information can make it more challenging for scammers to connect it to individuals.

Identity theft protection companies can monitor personal information, such as Social Security Numbers, phone numbers, and email addresses, alerting users if their data is being sold on the dark web or used to open accounts. Early warnings can help mitigate damage if personal data surfaces.

Using a VPN can help protect browsing activity by masking IP addresses and encrypting internet traffic, which is particularly relevant in cases like this, where exposed activity data may include location signals or usage patterns. While a VPN cannot erase past exposure, it reduces the visibility of new information and complicates the linking of future activity to individuals.

The recent data leak at Pornhub underscores the risks associated with long-stored user information. Although passwords and payment details were not compromised, the exposure of activity data can still have damaging consequences. ShinyHunters has demonstrated a willingness to exert pressure through public threats, highlighting the importance of remaining vigilant and proactive about online security.

Should companies be allowed to retain years of user activity data once it is no longer necessary? This question remains open for discussion as the implications of such data storage continue to unfold. For further insights, readers can visit CyberGuy.com.

Apple Addresses Two Zero-Day Vulnerabilities Exploited in Targeted Attacks

Apple has issued urgent security updates to address two zero-day vulnerabilities in WebKit, which were actively exploited in targeted attacks against specific individuals.

Apple has released emergency security updates to address two zero-day vulnerabilities that were actively exploited in highly targeted attacks. The company characterized these incidents as “extremely sophisticated,” aimed at specific individuals rather than the general public. While Apple did not disclose the identities of the attackers or victims, the limited scope of the attacks suggests they may be linked to spyware operations rather than widespread cybercrime.

Both vulnerabilities affect WebKit, the browser engine that powers Safari and all browsers on iOS devices. This raises significant risks, as simply visiting a malicious webpage could trigger an attack. The vulnerabilities are tracked as CVE-2025-43529 and CVE-2025-14174, and Apple confirmed that both were exploited in the same real-world attacks.

CVE-2025-43529 is a WebKit use-after-free vulnerability that can lead to arbitrary code execution when a device processes maliciously crafted web content. Essentially, this flaw allows attackers to execute their own code on a device by tricking the browser into mishandling memory. Google’s Threat Analysis Group discovered this vulnerability, which often indicates involvement from nation-state or commercial spyware entities.

The second vulnerability, CVE-2025-14174, also pertains to WebKit and involves memory corruption. Although Apple describes the impact as memory corruption rather than direct code execution, such vulnerabilities are frequently chained with others to fully compromise a device. This issue was discovered jointly by Apple and Google’s Threat Analysis Group.

Apple acknowledged that it was aware of reports confirming active exploitation in the wild, a statement that is particularly significant as it typically indicates that attacks have already occurred rather than merely presenting theoretical risks. The company addressed these vulnerabilities through improved memory management and enhanced validation checks, although it did not provide detailed technical information that could assist attackers in replicating the exploits.

The patches have been released across all of Apple’s supported operating systems, including the latest versions of iOS, iPadOS, macOS, Safari, watchOS, tvOS, and visionOS. Affected devices include iPhone 11 and newer models, multiple generations of iPad Pro, iPad Air from the third generation onward, the eighth-generation iPad and newer, and the iPad mini starting with the fifth generation. This update covers the vast majority of iPhones and iPads currently in use.

The fixes are available in iOS 26.2 and iPadOS 26.2, as well as in earlier versions such as iOS 18.7.3 and iPadOS 18.7.3, macOS Tahoe 26.2, tvOS 26.2, watchOS 26.2, visionOS 26.2, and Safari 26.2. Since Apple mandates that all iOS browsers utilize WebKit, the underlying issues also affected Chrome on iOS.

In light of these highly targeted zero-day attacks, users are encouraged to take several practical steps to enhance their security. First and foremost, it is crucial to install emergency updates as soon as they are available. Delaying updates can provide attackers with the window they need to exploit vulnerabilities. For those who often forget to update their devices, enabling automatic updates for iOS, iPadOS, macOS, and Safari can help ensure ongoing protection.

Most WebKit exploits begin with malicious web content, so users should exercise caution when clicking on links received via SMS, WhatsApp, Telegram, or email, especially if they are unexpected. If something seems off, it is safer to manually type the website address into the browser.

Installing antivirus software on all devices is another effective way to safeguard against malicious links that could install malware or compromise personal information. Antivirus programs can also alert users to phishing emails and ransomware scams, providing an additional layer of protection for personal data and digital assets.

For individuals who are journalists, activists, or handle sensitive information, reducing their attack surface is advisable. This can include using Safari exclusively, avoiding unnecessary browser extensions, and limiting the frequency of opening links within messaging apps. Apple’s Lockdown Mode is specifically designed for targeted attacks, restricting certain web technologies and blocking most message attachments.

Another proactive measure is to minimize personal data available online. The more information that is publicly accessible, the easier it is for attackers to profile potential targets. Users can reduce their visibility by removing data from broker sites and tightening privacy settings on social media platforms.

While no service can guarantee complete removal of personal data from the internet, utilizing a data removal service can be a smart choice. These services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of being targeted by scammers.

Users should also be aware of warning signs that their devices may be compromised, such as unexpected crashes, overheating, or sudden battery drain. While these symptoms do not automatically indicate a security breach, consistent issues warrant immediate updates and potentially resetting the device.

Although Apple has not disclosed specific details regarding the individuals targeted or the methods of attack, the pattern aligns closely with previous spyware campaigns that have focused on journalists, activists, political figures, and others of interest to surveillance operators. With these recent patches, Apple has now addressed seven zero-day vulnerabilities exploited in the wild in 2025 alone, including flaws disclosed earlier this year and a backported fix in September for older devices.

Have you installed the latest iOS or iPadOS update yet, or are you still putting it off? Let us know by writing to us at Cyberguy.com.

According to CyberGuy.com, staying informed and proactive about security updates is essential for protecting personal devices against targeted attacks.

Tesla Faces Investigation by U.S. Auto Safety Regulator

Tesla is under investigation by the NHTSA over potential safety concerns related to the emergency door release design in its Model 3 vehicles, raising questions about passenger safety in emergencies.

Tesla is facing scrutiny from the U.S. auto safety regulator, the National Highway Traffic Safety Administration (NHTSA), regarding the emergency door release design in its Model 3 compact sedans. The investigation was announced on December 23, following a defect petition that raised concerns about the accessibility and visibility of the emergency door release controls during critical situations.

The NHTSA’s inquiry focuses on whether the placement, labeling, and overall design of the emergency door release could pose a safety risk. In emergencies such as crashes, fires, or power failures, it is crucial for passengers to exit the vehicle quickly and safely. However, reports have indicated that the mechanical door release in the Model 3 may be hidden, unlabeled, and not intuitive for occupants unfamiliar with the vehicle.

In Tesla Model 3 vehicles, doors are primarily opened using electronic buttons instead of traditional handles. While mechanical emergency releases are included in the design, some users have reported difficulty locating these releases under stress or in low-visibility conditions. This has prompted the NHTSA to take a closer look at the situation.

The NHTSA’s defect investigations are preliminary steps in the regulatory process and do not automatically lead to a recall. During this investigation, the agency will collect data, review consumer complaints, analyze the vehicle’s design, and may request additional information from Tesla. If a safety-related defect is identified, Tesla could be required to issue a recall or implement design changes to mitigate the issue.

As of now, Tesla has not acknowledged any wrongdoing. The company has consistently maintained that its vehicles comply with all applicable safety standards. Supporters of Tesla’s design philosophy argue that simplified interiors reduce clutter and that the emergency releases are adequately documented in owner manuals.

This investigation underscores a larger conversation within the automotive industry as vehicles increasingly rely on software-driven designs. As manufacturers move away from traditional mechanical controls, regulators are paying closer attention to how design choices impact usability and safety in emergency situations. The outcome of this investigation could have significant implications not only for Tesla but also for other automakers exploring similar minimalist design approaches.

While inquiries like this do not inherently indicate fault, they serve as important reminders that user experience during emergencies is a critical aspect of overall vehicle safety. The findings from this review may influence how manufacturers balance innovation with accessibility, potentially shaping future design standards across the automotive industry.

According to The American Bazaar, the investigation reflects ongoing concerns about passenger safety in modern vehicles.

China Launches National Venture Capital Fund to Enhance Innovation

China has launched three state-backed venture capital funds aimed at enhancing innovation in hard technology and strategic emerging industries, with each fund exceeding 50 billion yuan.

China is making significant strides in the realm of hard technology. According to state broadcaster CCTV, the country officially unveiled three venture capital funds on Friday, designed to invest in various “hard technology” sectors.

The funds, each with a capital contribution exceeding 50 billion yuan (approximately $7.14 billion), were jointly initiated by the National Development and Reform Commission (NDRC) and the Ministry of Finance. Three regional sub-funds have been established in key areas: the Beijing–Tianjin–Hebei region, the Yangtze River Delta, and the Guangdong–Hong Kong–Macao Greater Bay Area.

Bai Jingyu, an official from the NDRC, stated that the initiative aims to leverage central government capital to attract investments from local governments, state-owned enterprises, financial institutions, and private investors. During a press conference, Bai emphasized that the funds will enhance support for strategic emerging industries and expedite the development of new productive forces.

The term “hard technology” encompasses sectors that are capital-intensive, research-heavy, and strategically vital, including semiconductors, advanced manufacturing, artificial intelligence, new materials, biotechnology, aerospace, and high-end equipment.

Unlike consumer internet or platform-based businesses, these sectors often necessitate longer investment horizons and sustained policy support before yielding commercial returns. By establishing large, state-backed venture capital funds, China aims to address the funding challenges faced by early-stage and growth-stage hard-tech firms.

According to reports from Reuters, the funds will primarily target early-stage startups valued at less than 500 million yuan, with no single investment exceeding 50 million yuan.

In recent years, Chinese policymakers have underscored the importance of “technological self-reliance,” particularly in critical areas such as semiconductor manufacturing and industrial software. Substantial venture capital backing can play a pivotal role in supporting startups through lengthy research and development cycles, facilitating production scaling, and connecting them with industrial partners.

The funds are expected to focus on companies engaged in integrated circuits, quantum technology, biomedicine, brain-computer interfaces, aerospace, and other essential hard technologies.

The substantial scale of these funds, each reportedly surpassing 50 billion yuan, reflects a growing confidence in the efficacy of venture investment as a policy instrument. Large fund sizes may enable diversified portfolios across multiple sub-sectors while allowing for significant investments in promising companies. Additionally, they may attract private capital by mitigating perceived risks and signaling official support for targeted industries.

However, experts caution that the success of these funds will hinge on professional management, clear investment criteria, and market-oriented decision-making. Merely allocating capital will not suffice; achieving successful outcomes will require robust governance and the ability to identify commercially viable technologies.

The launch of these three venture capital funds underscores China’s commitment to accelerating advancements in hard technology. As global competition in advanced industries intensifies, such initiatives are poised to play an increasingly crucial role in shaping the country’s innovation landscape and long-term economic growth.

Ultimately, the effectiveness of this strategy will depend on its execution, governance, and responsiveness to market dynamics. Nevertheless, this initiative signifies an effort to cultivate an ecosystem where high-risk, high-impact innovation can thrive. Over time, sustained support for hard technology could bolster industrial capabilities, enhance supply-chain security, and foster new engines of economic growth. More broadly, it illustrates how targeted financial mechanisms are increasingly utilized as tools to guide national development and secure a competitive edge in emerging technologies.

According to Reuters, the establishment of these funds marks a pivotal moment in China’s strategy to enhance its technological capabilities.

Spectacular Blue Spiral Light Likely Caused by SpaceX Rocket Launch

A stunning blue light, likely caused by a SpaceX Falcon 9 rocket, illuminated the night sky over Europe on Monday, captivating viewers and sparking widespread discussion on social media.

A mesmerizing blue light, resembling a cosmic whirlpool, brightened the night skies over Europe on Monday. This spectacular phenomenon was likely the result of the SpaceX Falcon 9 rocket booster re-entering the Earth’s atmosphere, according to experts.

Time-lapse footage captured from Croatia around 4 p.m. EST (9 p.m. local time) showcased the glowing spiral as it spun across the sky. Many social media users compared the sight to a spiral galaxy, with the full video lasting approximately six minutes at normal speed.

The U.K.’s Met Office reported receiving numerous accounts of an “illuminated swirl in the sky.” They attributed the phenomenon to the SpaceX rocket that had launched from Cape Canaveral, Florida, at approximately 1:50 p.m. EST as part of the classified NROL-69 mission for the National Reconnaissance Office (NRO).

“This is likely to be caused by the SpaceX Falcon 9 rocket, launched earlier today,” the Met Office stated on X. “The rocket’s frozen exhaust plume appears to be spinning in the atmosphere and reflecting sunlight, causing it to appear as a spiral in the sky.”

This glowing light is an example of what some refer to as a “SpaceX spiral,” according to Space.com. Such spirals occur when the upper stage of a Falcon 9 rocket separates from its first-stage booster. As the upper stage continues its ascent into space, the lower stage descends, spiraling back to Earth while releasing any remaining fuel.

The fuel, upon reaching high altitudes, freezes almost instantly. Sunlight reflects off the frozen exhaust, resulting in the striking glow observed in the sky.

Fox News Digital reached out to SpaceX for comment but did not receive an immediate response. This cosmic display occurred just days after a SpaceX team collaborated with NASA to successfully return two stranded astronauts to Earth.

According to Space.com, the captivating blue spiral is a reminder of the complexities and wonders of space travel, as well as the innovative technology employed by SpaceX in its missions.

Most Parked Domains Are Now Promoting Scams and Malware

Recent research indicates that over 90 percent of parked domains now redirect users to scams and malware, highlighting the dangers of simple typos when entering web addresses.

Typing a web address directly into your browser may seem like a harmless practice, but new research suggests it has become one of the riskiest activities online. A study conducted by cybersecurity firm Infoblox reveals a significant shift in the landscape of parked domains, with most now redirecting visitors to scams, malware, or deceptive security warnings.

Parked domains are essentially unused or expired web addresses. They can arise from a variety of reasons, including forgotten renewals or deliberate misspellings of popular sites such as Google, Netflix, or YouTube. For years, these domains displayed benign placeholder pages that featured ads and links to monetize accidental traffic. However, this is no longer the case. Infoblox found that more than 90 percent of visits to parked domains now lead to malicious content, including scareware, fake antivirus offers, phishing pages, and malware downloads.

Direct navigation, which involves typing a website address manually instead of using bookmarks or search results, can have dire consequences. A simple typo can redirect users to harmful sites without triggering an error message. For instance, mistyping gmail.com as gmai.com may not produce an error, but it could send your email directly to cybercriminals. Infoblox discovered that some of these typo domains actively operate mail servers to capture messages. Alarmingly, many of these domains are part of extensive portfolios, with one group controlling nearly 3,000 lookalike domains associated with banks, tech companies, and government services.

The experience of visiting a parked domain can vary significantly from user to user, and this is intentional. Researchers found that parked pages often profile visitors in real time, analyzing their IP address, device type, location, cookies, and browsing behavior. Based on this data, the domain determines what content to display next. Users accessing the internet through a VPN or non-residential connection may see harmless placeholder pages, while residential users on personal devices are more likely to be redirected to scams or malware. This filtering mechanism allows attackers to remain hidden while maximizing the success of their schemes.

Several trends contribute to the growing prevalence of malicious parked domains. First, traffic from these domains is frequently resold multiple times through affiliate networks. By the time it reaches a malicious advertiser, there is often no direct relationship with the original parking company. Additionally, recent changes in advertising policies may have inadvertently increased exposure to these threats. For instance, Google now requires advertisers to opt in before running ads on parked domains, a move intended to enhance safety that may have pushed bad actors deeper into affiliate networks with less oversight. This has resulted in a murky ecosystem where accountability is difficult to trace.

Infoblox also identified instances of typosquatting targeting government services. In one case, a researcher mistakenly visited ic3.org instead of ic3.gov while attempting to report a crime. The result was a fake warning page claiming that a cloud subscription had expired, which could have easily delivered malware. This incident underscores how easily users can fall into these traps, even when trying to perform important tasks.

To mitigate the risks associated with parked domains, users can adopt several smart habits. First, save the web addresses of banks, email providers, and government portals to avoid typing them manually. Additionally, take your time when entering web addresses; an extra second can prevent costly mistakes. Strong antivirus software is also essential, as it can protect devices from malicious pages by blocking malware downloads, scripts, and fake security pop-ups.

While no service can guarantee complete removal of personal data from the internet, employing a data removal service can be a wise choice. These services actively monitor and systematically erase personal information from numerous websites, reducing the risk of scammers cross-referencing data from breaches with information available on the dark web. By limiting the information accessible to potential attackers, users can make it more challenging for them to target individuals.

Be cautious of fake warnings about expired subscriptions or infected devices, as legitimate companies do not use panic-inducing screens. Regular security updates can also close the loopholes that attackers exploit for malicious redirects. Although not a complete solution, using a VPN can help reduce exposure to targeted redirects linked to residential IP addresses.

The web has evolved in subtle yet dangerous ways. Parked domains have transitioned from passive placeholders to active delivery systems for scams and malware. The most alarming aspect is how little effort it takes to trigger an attack; a simple typo can lead to significant consequences. As threats become quieter and more automated, maintaining safe browsing habits is more important than ever.

Have you ever mistyped a web address and ended up on a suspicious site, or do you rely entirely on bookmarks now? Share your experiences with us at Cyberguy.com.

According to Infoblox, the landscape of parked domains poses a growing threat to online safety.

New Scam Targets iPhone Owners, Tricks Them into Giving Phones Away

Scammers are exploiting new iPhone purchases by using pressure tactics and fake carrier calls to trick owners into returning their devices under false pretenses.

Receiving a brand-new iPhone should be a moment of excitement and joy. However, recent reports indicate that scammers are targeting new iPhone owners, turning this experience into a potential nightmare.

In the past few weeks, numerous individuals have reported receiving unsolicited phone calls shortly after activating their new devices. The callers, who claim to represent major carriers, assert that a shipping error has occurred and demand the immediate return of the phone. One particular incident highlights the aggressive tactics employed by these scammers, showcasing how convincing they can be.

These scams rely heavily on timing and pressure. Criminals often target individuals who have recently purchased new iPhones, a tactic made possible by accessing data from various sources, including data-broker sites and leaked purchase information. To further enhance their credibility, scammers spoof carrier phone numbers, making it appear as though the call is legitimate. They often possess specific details about the device model, which adds to their convincing facade.

Once the call begins, the scammer quickly presents a fabricated story about a shipping mistake. They insist that the phone must be returned immediately, claiming that a courier is already scheduled to pick it up. If the victim follows these instructions, they unwittingly hand over their brand-new iPhone, which the scammer then either resells or dismantles for parts. By the time the victim realizes something is amiss, recovery of the device is often impossible.

This scam mimics real customer service processes, as legitimate carriers do ship replacement phones and utilize services like FedEx for returns. Scammers blend these facts with a sense of urgency, counting on victims to act before verifying the legitimacy of the call. They exploit the common assumption that a phone call appearing to come from a legitimate source must indeed be real.

Recognizing the warning signs of this scam can help individuals protect themselves. Key indicators include unsolicited calls regarding returns that were never requested, pressure to act quickly, instructions to leave the phone outside, promises of gift cards for cooperation, and follow-up calls urging immediate action. It is crucial to remember that legitimate carriers do not conduct returns in this manner.

To safeguard against these scams, it is essential to slow down and verify any claims made during such calls. Scammers thrive on speed and confusion, so taking a moment to pause can make a significant difference. Hang up and contact your carrier directly using the number listed on your bill or their official website. If there is a legitimate issue, they will confirm it.

Legitimate returns typically involve tracked shipping labels associated with your account. Carriers will never ask you to leave your phone on a porch or doorstep. Any demand for immediate action should raise red flags.

Scammers often have access to personal data, making it easier for them to target victims. To mitigate this risk, individuals can consider using data removal services that help eliminate personal information from data broker sites. While no service can guarantee complete removal of data from the internet, these services can significantly reduce exposure and make it more challenging for scammers to cross-reference information.

Additionally, employing strong antivirus software can provide another layer of protection. Many antivirus tools can block scam calls, warn about phishing attempts, and alert users to suspicious activity before any damage occurs. Keeping your devices protected with reliable antivirus software is crucial in safeguarding personal information and digital assets.

It is also advisable to keep records of voicemails, phone numbers, and timestamps related to suspicious calls. This information can assist carriers in warning other customers and identifying repeat scams. Criminals often reuse the same tactics, and sharing warnings with friends and family can help prevent future victims.

As scams targeting new iPhone owners become increasingly sophisticated and aggressive, the simplest defense remains the most effective: verify before you act. If you receive a call pressuring you to return your device, take a moment to pause and contact the company directly. This one step could save you from significant financial loss and frustration.

In a world where urgency can cloud judgment, it is vital to remain vigilant. If a carrier were to call you tomorrow claiming an issue with your new phone, would you take the time to verify their claims, or would you succumb to the pressure? The choice could make all the difference.

For more information on protecting yourself from scams and to receive tech tips and security alerts, visit CyberGuy.com.

Nvidia Licenses Technology from Groq and Expands Executive Team

Nvidia has entered a licensing agreement with Groq, acquiring its technology and key executives while allowing Groq to remain an independent entity.

Nvidia has announced a significant licensing agreement with the startup Groq, which includes the hiring of Groq’s CEO and other key executives. This development was detailed in a blog post by Groq, highlighting a trend where major tech companies engage with promising startups to leverage their technology and talent without outright acquisitions.

Groq is known for its specialization in “inference,” a process that involves artificial intelligence models responding to user queries after they have been trained. While Nvidia has established dominance in the AI training sector, it faces increasing competition from both established rivals and emerging startups like Groq and Cerebras Systems.

The agreement has been characterized by Groq as a “non-exclusive licensing agreement” for its inference technology. Groq emphasized that this partnership reflects a mutual commitment to enhancing access to high-performance, cost-effective inference solutions.

As part of this deal, Jonathan Ross, Groq’s Founder, and Sunny Madra, Groq’s President, along with other members of the Groq team, will transition to Nvidia to help advance and scale the licensed technology. Despite these changes, Groq will continue to operate independently under the leadership of Simon Edwards, who will assume the role of CEO.

A source close to Nvidia confirmed the agreement, although Groq has not disclosed any financial details related to the deal. Reports from CNBC suggested that Nvidia had considered acquiring Groq for $20 billion in cash, but neither company has commented on this speculation.

Bernstein analyst Stacy Rasgon noted in a recent client communication that antitrust concerns could pose a significant risk in this arrangement. However, by structuring the deal as a non-exclusive license, Nvidia may maintain the appearance of competition, even as Groq’s leadership and technical talent transition to Nvidia.

Groq has seen substantial growth, more than doubling its valuation to $6.9 billion from $2.8 billion since August of last year, following a $750 million funding round in September. The company distinguishes itself by not relying on external high-bandwidth memory chips, which has insulated it from the memory shortages currently affecting the global chip industry. Instead, Groq utilizes on-chip memory known as SRAM, which accelerates interactions with chatbots and other AI models, albeit at the cost of limiting the size of the models it can serve.

In the competitive landscape, Groq’s main rival is Cerebras Systems, which is reportedly planning to go public next year. Both companies have secured significant contracts in the Middle East, further solidifying their positions in the market.

Nvidia’s CEO, Jensen Huang, recently delivered his most important keynote address of the year, emphasizing the company’s strategy to maintain its leadership as the AI market transitions from training to inference.

This licensing agreement with Groq marks another strategic move for Nvidia as it seeks to bolster its capabilities in the rapidly evolving AI landscape, ensuring that it remains at the forefront of technological advancements.

For further details, refer to Reuters.

Trump’s ‘Tech Force’ Initiative Receives Approximately 25,000 Applications

Approximately 25,000 individuals have applied to join the Trump administration’s “Tech Force,” aimed at enhancing federal expertise in artificial intelligence and technology.

Around 25,000 people have expressed interest in joining the “Tech Force,” a new initiative by the Trump administration designed to recruit engineers and technology specialists with expertise in artificial intelligence (AI) for federal roles.

The U.S. Office of Personnel Management (OPM) announced that it will use the applications to recruit software engineers, data scientists, and other tech professionals. This figure was confirmed by a senior official within the Trump administration, as reported by Reuters.

The program aims to enlist approximately 1,000 engineers, data scientists, and AI specialists to work on critical technology projects across various government agencies. Participants, referred to as “fellows,” will engage in assignments that include AI implementation, application development, and data modernization.

Scott Kupor, director of OPM, noted that candidates will compete for 1,000 positions in the inaugural Tech Force cohort. The selected recruits will spend two years working on technology projects within federal agencies, including the Departments of Homeland Security, Veterans Affairs, and Justice, among others.

Members of the Tech Force will commit to a two-year employment program, collaborating with teams that report directly to agency leaders. This initiative also involves partnerships with leading technology companies such as Amazon Web Services, Apple, Dell Technologies, Microsoft, Nvidia, OpenAI, Palantir, Oracle, and Salesforce.

Upon completion of the two-year program, participants will have the opportunity to seek full-time positions with these private sector partners, who have pledged to consider alumni for employment. Additionally, private companies can nominate their employees to participate in government service stints.

This initiative was unveiled shortly after President Donald Trump signed an executive order aimed at preventing state-level AI regulations and establishing a unified national law. It reflects the administration’s commitment to maintaining American leadership in the AI sector.

According to CNBC, annual salaries for these positions are expected to range from $150,000 to $200,000, along with benefits.

Applications for the Tech Force opened on Monday through federal hiring channels, with OPM responsible for initial résumé screenings and technical assessments before agencies make final hiring decisions. Kupor aims to have the first cohort onboarded by the end of March 2026.

However, the initiative has faced criticism regarding its timing and structure. Max Stier, CEO of the Partnership for Public Service, a nonprofit advocating for federal workers, expressed concerns to Axios about the program’s overlap with previous initiatives undertaken by the U.S. Digital Service, which was disbanded by the current administration.

Rob Shriver, former acting OPM director and current managing director at Democracy Forward, raised questions about potential conflicts of interest. He highlighted concerns regarding private sector employees working on government projects while retaining their company stock holdings.

This ambitious hiring campaign reflects the Trump administration’s strategy to bolster federal capabilities in technology and AI, amidst ongoing debates about the implications of such initiatives.

For further details, refer to Reuters.

Wolf Species Extinct for 12,500 Years Revived, US Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species made famous by “Game of Thrones,” using advanced genetic technologies.

A U.S. company has announced a groundbreaking achievement: the resurrection of the dire wolf, a species that last roamed the Earth over 12,500 years ago. This ambitious project has garnered attention not only for its scientific implications but also for its connection to the popular HBO series “Game of Thrones,” where dire wolves are depicted as larger and more intelligent than their modern counterparts.

Colossal Biosciences, based in Dallas, claims to have successfully brought back three dire wolves through a combination of genome-editing and cloning technologies. While the company heralds this as the world’s first successful “de-extincted animal,” some experts argue that what has been created is more accurately described as genetically modified wolves rather than true re-creations of the ancient apex predator.

Historically, dire wolves inhabited the American midcontinent during the Ice Age, with the oldest confirmed fossil dating back approximately 250,000 years, found in Black Hills, South Dakota. In “Game of Thrones,” these wolves are portrayed as fiercely loyal companions to the Stark family, further embedding them into popular culture.

The three litters produced by Colossal include two adolescent males named Romulus and Remus, along with a female puppy named Khaleesi. The process began with the extraction of blood cells from a living gray wolf, which were then modified using CRISPR technology—short for “clustered regularly interspaced short palindromic repeats.” This technique allowed scientists to make genetic edits at 20 different sites, resulting in traits reminiscent of the dire wolf, such as larger body sizes and longer, lighter-colored fur, adaptations believed to have aided their survival in cold climates.

Of the 20 genome edits made, 15 correspond to genes found in actual dire wolves. The ancient DNA used for these modifications was sourced from two fossils: a tooth from Sheridan Pit, Ohio, estimated to be around 13,000 years old, and an inner ear bone from American Falls, Idaho, dating back approximately 72,000 years.

Once the genetic material was prepared, it was transferred into an egg cell from a domestic dog. The embryos were then implanted into surrogate dogs, and after a gestation period of 62 days, the genetically engineered pups were born.

Ben Lamm, CEO of Colossal Biosciences, described this achievement as a significant milestone, emphasizing that it demonstrates the effectiveness of the company’s comprehensive de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal Biosciences has previously announced similar projects aimed at genetically altering living species to create animals resembling extinct species such as woolly mammoths and dodos. In conjunction with the announcement about the dire wolves, the company also revealed the birth of two litters of cloned red wolves, the most critically endangered wolf species in the world. This development is seen as evidence of the potential for conservation through de-extinction technology.

In late March, Colossal’s team met with officials from the Interior Department to discuss their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists have expressed skepticism regarding the feasibility of fully restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, voiced concerns about the claims made by Colossal. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw commented. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences has stated that the wolves are currently thriving in a 2,000-acre secure ecological preserve in Texas, certified by the American Humane Society and registered with the USDA. Looking ahead, the company plans to restore the species in secure ecological preserves, potentially on indigenous lands, as part of its long-term vision for conservation.

This ambitious project raises important questions about the ethics and feasibility of de-extinction, as well as the implications for biodiversity and conservation efforts moving forward. As the conversation continues, the intersection of technology and nature remains a topic of great interest and debate in the scientific community, according to Fox News.

New Malware Threat Can Read Chats and Steal Money

A new Android banking trojan named Sturnus poses significant threats by stealing credentials, reading encrypted messages, and controlling devices, raising alarms in the cybersecurity community.

A new Android banking trojan known as Sturnus is emerging as a formidable threat in the cybersecurity landscape. Although still in its early development stages, Sturnus exhibits capabilities that resemble those of a fully operational malware program.

Once it infects a device, Sturnus can take over the screen, steal banking credentials, and even read encrypted messages from trusted applications. What makes this malware particularly concerning is its ability to operate quietly in the background. Users may believe their messages are secure due to end-to-end encryption, but Sturnus patiently waits for the phone to decrypt these messages before capturing them. Importantly, it does not break encryption; instead, it intercepts messages after they have been decrypted on the device.

According to cybersecurity research firm ThreatFabric, Sturnus employs multiple layers of attack that provide the operator with nearly complete visibility into the infected device. It utilizes HTML overlays that mimic legitimate banking applications, tricking users into entering their credentials. Any information entered is immediately sent to the attacker through a WebView that forwards the data without delay.

In addition to overlays, Sturnus employs an aggressive keylogging system via the Android Accessibility Service. This feature allows it to capture text as users type, track which applications are open, and map every user interface element on the screen. Even if applications block screenshots, the malware continues to monitor the UI tree in real time, enabling it to reconstruct user activity.

Sturnus also monitors popular messaging applications such as WhatsApp, Telegram, and Signal. It waits for these apps to decrypt messages locally before capturing the text displayed on the screen. Consequently, while chats may remain encrypted during transmission, Sturnus gains access to the entire conversation once the message is visible on the device.

Furthermore, the malware includes a comprehensive remote control feature that allows live screen streaming and a more efficient mode that transmits only interface data. This capability enables precise taps, text injection, scrolling, and permission approvals without alerting the victim.

To protect itself, Sturnus acquires Device Administrator privileges, making it difficult for users to remove it. If a user attempts to access the settings page to disable these permissions, the malware detects the action and swiftly diverts the user away from the screen. It also monitors various factors, including battery state, SIM changes, developer mode, and network conditions, to adapt its behavior accordingly. All collected data is sent back to the command-and-control server through a combination of WebSocket and HTTP channels, secured with RSA and AES encryption.

When it comes to financial theft, Sturnus has several methods at its disposal. It can collect credentials through overlays, keylogging, UI-tree monitoring, and direct text injection. In some cases, it can even obscure the user’s screen with a full-screen overlay while the attacker executes fraudulent transactions in the background. As a result, users remain unaware of any illicit activity until it is too late.

To safeguard against threats like Sturnus, users can take several practical steps. First, avoid downloading APKs from forwarded links, dubious websites, Telegram groups, or third-party app stores. Banking malware often spreads through sideloaded installers disguised as updates, coupons, or new features. If an app is not available in the Google Play Store, verify the developer’s official website, check provided hashes, and read recent reviews to ensure the app has not been compromised.

Many dangerous malware variants rely on accessibility permissions, which grant full visibility into the user’s screen and interactions. Device administrator rights are even more powerful, as they can prevent removal. If a seemingly harmless utility app suddenly requests these permissions, users should exercise caution and refrain from granting them. Such permissions should only be granted to trusted applications, such as password managers or accessibility tools.

Installing system updates promptly is crucial, as many Android banking trojans target older devices lacking the latest security patches. Users with devices that no longer receive updates are at a heightened risk, particularly when using financial applications. Additionally, avoid sideloading custom ROMs unless users are confident in how they handle security patches and Google Play Protect.

Android devices come equipped with Google Play Protect, which detects a significant portion of known malware families and alerts users when apps behave suspiciously. For enhanced security and control, users may consider opting for a third-party antivirus application. These tools can notify users when an app attempts to log their screen or take control of their device.

To further protect personal information, users should install robust antivirus software on all their devices. This software can alert users to phishing emails and ransomware scams, helping to safeguard personal data and digital assets.

Many malware campaigns rely on data brokers, leaked databases, and scraped profiles to compile lists of potential targets. If personal information such as phone numbers, email addresses, or social media handles are available on various broker sites, attackers can more easily reach individuals with malware links or tailored scams. Utilizing a personal data removal service can help mitigate this risk by removing personal information from data broker listings.

While no service can guarantee complete removal of personal data from the internet, a data removal service is a prudent choice. These services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and effectively reducing the risk of scammers cross-referencing data from breaches with information found on the dark web.

As Sturnus continues to develop, it stands out for the level of control it offers attackers. It bypasses encrypted messaging, steals banking credentials through multiple methods, and maintains a strong grip on infected devices via administrator privileges and constant environmental checks. Although current campaigns may be limited, the sophistication of Sturnus suggests it is being refined for broader operations. If it achieves widespread distribution, it could become one of the most damaging Android banking trojans in circulation.

For more information on cybersecurity threats and protective measures, visit Cyberguy.com.

Android Sound Notifications Enhance User Awareness of Important Alerts

Android’s new Sound Notifications feature helps users stay aware of important sounds, such as smoke alarms and doorbells, even while wearing headphones.

Staying aware of your surroundings is crucial, especially when it comes to hearing important alerts like smoke alarms, appliance beeps, or a knock at the door. However, in our busy lives, it’s easy to miss these sounds, particularly when wearing headphones or focusing on a task. This is where Android’s Sound Notifications feature comes into play.

Designed primarily to assist individuals who are hard of hearing, Sound Notifications is a built-in accessibility feature that listens for specific sounds and sends alerts directly to your screen. Think of it as a gentle tap on the shoulder, notifying you when something important occurs.

While this feature is particularly beneficial for those with hearing impairments, it is also useful for anyone who frequently uses noise-canceling headphones or tends to miss alerts at home. The ability to stay informed without constant vigilance can significantly enhance your daily routine.

Sound Notifications utilize your phone’s microphone to detect key sounds in your environment. When it identifies a sound, it sends a visual alert, which may include a pop-up notification, a vibration, or even a camera flash. This feature can detect a variety of sounds, including smoke alarms, doorbells, and baby cries, making it practical for both home and work settings.

One of the standout aspects of Sound Notifications is the level of control it offers users. You can customize which sounds you want to be alerted to, ensuring that you only receive notifications for the sounds that matter most to you. This flexibility allows you to maintain focus on your tasks while still being aware of your surroundings.

Getting started with Sound Notifications is a straightforward process. For those using a Samsung Galaxy S24 Ultra running the latest version of Android, the setup involves selecting a shortcut to enable the feature. Once activated, your phone will listen for the selected sounds in the background.

If you do not see the Sound Notifications option, you may need to install the Live Transcribe & Notifications app from the Google Play Store. This app allows you to enable Sound Notifications and customize your sound alerts further.

Once activated, your phone will keep a log of detected sounds, which can be particularly useful if you were away from your device and want to review what alerts you may have missed. Additionally, you can save and name sounds, making it easier to differentiate between various alerts, such as the sound of your washer finishing or your microwave timer going off.

Android also allows users to train the Sound Notifications feature to recognize unique sounds specific to their environment. For instance, if your garage door has a distinct tone or an appliance emits a nonstandard beep, you can record that sound. The phone will then listen for it in the future, enhancing the feature’s utility.

By default, Sound Notifications utilize vibration and camera flashes for alerts, which can be adjusted based on the importance of the sound. This customization ensures that you receive the right level of attention for each notification, allowing you to prioritize what matters most.

Privacy is a significant concern for many users, and it’s important to note that Sound Notifications process audio locally on your device. This means that sounds are not sent to Google or any external servers, ensuring that your data remains secure. The only exception is if you choose to include audio with feedback, which is entirely optional.

In summary, Android’s Sound Notifications feature addresses a real need for awareness in our increasingly distracting environments. The setup is quick, the controls are flexible, and your privacy is maintained throughout the process. Once you enable this feature, you may find yourself wondering how you managed without it.

Have you missed any important sounds recently that your phone could have caught for you? Share your experiences with us at Cyberguy.com.

According to CyberGuy, this feature is a game-changer for anyone looking to enhance their awareness in a busy world.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals in the future.

Google is embarking on an innovative project that harnesses artificial intelligence (AI) to explore the intricate communication methods of dolphins. The ultimate goal is to enable humans to converse with these intelligent creatures.

Dolphins are celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. For thousands of years, they have fascinated people, and now Google is collaborating with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit organization that has dedicated over 40 years to studying and recording dolphin sounds.

The initiative has led to the development of a new AI model named DolphinGemma. This model aims to decode the complex sounds dolphins use to communicate with one another. WDP has long correlated specific sound types with behavioral contexts. For example, signature whistles are commonly used by mothers and their calves to reunite, while burst pulse “squawks” tend to occur during confrontations among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are chasing sharks.

Using the extensive data collected by WDP, Google has built DolphinGemma, which is based on its own lightweight AI model known as Gemma. DolphinGemma is designed to analyze a vast library of dolphin recordings, identifying patterns, structures, and potential meanings behind the vocalizations.

Over time, DolphinGemma aims to categorize dolphin sounds similarly to how humans use words, sentences, or expressions in language. By recognizing recurring sound patterns and sequences, the model can assist researchers in uncovering hidden structures and meanings within the dolphins’ natural communication—a task that previously required significant human effort.

According to a blog post from Google, “Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.”

DolphinGemma utilizes audio recording technology from Google’s Pixel phones, which allows for high-quality sound recordings of dolphin vocalizations. This technology can effectively filter out background noise, such as waves, boat engines, or underwater static, ensuring that the AI model receives clean audio data. Researchers emphasize that clear recordings are essential, as noisy data could hinder the AI’s ability to learn.

Google plans to release DolphinGemma as an open model this summer, enabling researchers worldwide to utilize and adapt it for their own studies. While the model has been trained primarily on Atlantic spotted dolphins, it has the potential to be fine-tuned for studying other species, such as bottlenose or spinner dolphins.

In the words of Google, “By providing tools like DolphinGemma, we hope to give researchers worldwide the tools to mine their own acoustic datasets, accelerate the search for patterns, and collectively deepen our understanding of these intelligent marine mammals.”

This groundbreaking project represents a significant step toward bridging the communication gap between humans and dolphins, opening new avenues for research and interaction with these fascinating creatures.

According to Google, the development of DolphinGemma could revolutionize our understanding of dolphin communication and enhance our ability to connect with them.

China Introduces Humanoid Robots for 24/7 Border Surveillance

China has officially deployed humanoid robots at its border crossings, marking a significant advancement in automated surveillance and logistics operations.

China has taken a decisive step toward automating border management by deploying humanoid robots for continuous surveillance, inspections, and logistics at its border crossings. This initiative, which highlights the rapid integration of artificial intelligence and robotics into state infrastructure, involves a contract worth 264 million yuan (approximately $37 million) awarded to UBTech Robotics. The rollout of these robots is scheduled to commence in December at border checkpoints in Fangchenggang, located in the Guangxi region adjacent to Vietnam.

According to UBTech, the humanoid robots will manage the “flow of personnel,” assist with inspections, and handle logistics operations at border facilities. Initially, these robots will perform support tasks under human supervision. However, officials and industry observers note that this deployment signifies a major shift toward continuous, automated border operations.

“Humanoid robots allow for persistent operation in complex and remote environments,” the company stated. “They can reduce human workload while improving efficiency and consistency in high-demand areas such as border crossings.”

The introduction of humanoid robots patrolling borders may seem like a concept from science fiction, but it is becoming a reality in China. Unlike human guards, robots do not require rest, shelter, or food—factors that are critical at remote border posts where logistics can be challenging. The Walker S2, the model being deployed, is equipped with a self-replaceable battery system that allows it to swap out depleted batteries independently in about three minutes, facilitating near-continuous operation.

This capability significantly lowers long-term operational costs. “Energy autonomy changes the entire maintenance model,” noted one robotics industry analyst. “Instead of constant supervision, you move toward planned maintenance cycles, which is far more efficient for large-scale deployments.”

For the time being, UBTech states that the robots will focus on support and inspection-related duties at the China-Vietnam border, with human operators retaining decision-making authority, often through remote control systems.

China’s exploration of robotic technology in border and customs management is not entirely new. Humanoid robots have previously been deployed at customs checkpoints and airports across the country, assisting travelers and monitoring facilities. However, the Fangchenggang deployment is notable for its scale and permanence, as well as the transition to a 24/7 robotic presence in an active border environment.

This expansion has also increased demand for vendor-independent fleet management software, which can handle programming, teleoperation, and compliance reporting across various robot models. Such systems enable human supervisors to oversee multiple robots simultaneously, even from distant command centers.

“Safety checks can now be carried out more clearly, with humans in charge—even if that control is remote,” UBTech stated.

The Walker S2 humanoid robot is designed to closely mimic human proportions and movement, making it particularly suited for environments built for people. Standing at 176 centimeters tall and weighing 70 kilograms, it can walk at speeds of up to 2 meters per second, roughly equivalent to a brisk human pace.

Its design features a flexible waist with rotation and angle ranges similar to a human’s, ambidextrous hands capable of carrying up to 7.5 kilograms, and high-precision sensors in each hand for delicate tasks. Additionally, the robot is equipped with microphones and speakers, allowing for basic verbal interactions.

Constructed from composite materials and aeronautical-grade aluminum alloy, with a 3D-printed main casing, the Walker S2 is engineered for durability in demanding environments. UBTech emphasizes that the robot’s humanoid form allows it to operate existing infrastructure—such as doors, tools, and checkpoints—without necessitating major redesigns.

While the Fangchenggang deployment is officially described as a pilot program, UBTech’s ambitions extend beyond the border. In a recent press release, the company announced plans to begin mass production and large-scale shipping of its industrial humanoid robots, citing a surge in orders throughout 2025.

“This is a strong signal that humanoid robots are moving from experimental showcases to real-world applications,” the company stated. Shareholders appear to agree, as UBTech has framed the project as a milestone in the commercialization of humanoid robotics.

Industry experts suggest that border crossings are a logical testing ground for robotic technology. “Borders are dynamic, noisy, exposed to weather, and require constant vigilance,” said one robotics researcher. “They are exactly the kind of environment where robots can complement or gradually replace human labor.”

For now, China insists that humans remain in control, with robots serving as force multipliers rather than autonomous enforcers. However, analysts suggest that as AI decision-making capabilities improve, humanoid robots may be entrusted with increasingly independent responsibilities.

The Fangchenggang deployment underscores a broader trend: nations are beginning to “hire” machines for roles once thought inseparable from human judgment. Whether in logistics, surveillance, or security, humanoid robots are steadily transitioning from novelty to necessity.

As one observer remarked, “What we’re seeing at China’s borders today may soon become standard practice elsewhere—a future where the first line of contact is no longer human, but humanoid,” according to Global Net News.

Netflix Suspension Scam Targets Users Through Phishing Emails

As the holiday season approaches, Netflix phishing scams are on the rise, with scammers targeting unsuspecting users through convincing fake emails.

The Christmas season often brings an increase in phishing scams, particularly those aimed at Netflix users. These scams typically manifest as fake emails that attempt to trick recipients into providing personal information. One such case involved a user named Stacey P., who received a suspicious email that appeared to be from Netflix.

Stacey’s experience highlights how realistic these phishing attempts can seem, especially during the busy holiday shopping season. With many people juggling subscriptions, gifts, and billing changes, a fake alert can easily catch someone off guard. Stacey took the precaution of verifying the email before taking any action, which ultimately saved him from falling victim to the scam.

At first glance, the Netflix suspension email looked polished and official. However, a closer examination revealed several red flags that indicated it was fraudulent. For instance, the email contained glaring grammatical errors, such as “valldate” instead of “validate” and “Communicication” instead of “communication.” Additionally, the message addressed the recipient as “Dear User,” rather than using their actual name, which is a standard practice in legitimate communications from Netflix.

The email claimed that the user’s billing information had failed and warned that their membership would be suspended within 48 hours unless they took immediate action. Scammers often create a sense of urgency to prevent individuals from thinking critically about the situation. The email featured a bold red “Restart Membership” button, designed to lure users into entering their credentials on a phishing page. Once a user inputs their password and payment details, those sensitive pieces of information are handed directly to the attackers.

Another notable detail in the email was the footer, which included odd wording about inbox preferences and a Scottsdale address that is not associated with Netflix. Legitimate subscription services typically maintain consistent company details across their communications.

To protect oneself from such phishing attempts, there are several best practices to follow. First, it is advisable to access Netflix directly through a browser or app instead of clicking any links in suspicious emails. This ensures that users are viewing their actual account status, which is always accurate on the official site.

Phishing pages often mimic real websites, making it crucial to type the official URL directly into the browser. This method keeps users in control and helps them avoid fake pages. Additionally, scammers frequently gather email addresses and personal information from data broker sites, which fuels subscription scams like the one Stacey encountered. Utilizing a trusted data removal service can help minimize the amount of personal information available online, thereby reducing the risk of future phishing attempts.

While no service can guarantee complete removal of personal data from the internet, a reputable data removal service can actively monitor and systematically erase personal information from numerous websites. This proactive approach not only provides peace of mind but also significantly reduces the likelihood of being targeted by scammers.

When using a computer, hovering over a link can reveal its true destination. If the address appears suspicious, it is best to delete the message. Users are also encouraged to forward any dubious Netflix emails to phishing@netflix.com, which helps the fraud team block similar messages in the future.

Implementing two-factor authentication (2FA) for email accounts and installing robust antivirus software can further protect against malicious pages. Strong antivirus solutions can alert users to phishing emails and ransomware scams, safeguarding personal information and digital assets.

If a user inadvertently enters their billing information on a fake login page, attackers can exploit that data for various malicious purposes, including identity theft. Identity theft protection services can monitor personal information, such as Social Security numbers and email addresses, alerting users if their data is being sold on the dark web or used to open unauthorized accounts. These services can also assist in freezing bank and credit card accounts to prevent further unauthorized use.

Stacey’s vigilance prevented him from becoming yet another victim of this email scam. As phishing attempts become increasingly sophisticated, recognizing the warning signs and following the recommended precautions can save individuals time, money, and frustration.

Have you encountered a fake subscription alert that nearly deceived you? Share your experiences by reaching out to us at Cyberguy.com.

According to CyberGuy.com, staying informed and cautious is the best defense against phishing scams during the holiday season.

Soviet-Era Spacecraft Returns to Earth After 53 Years in Orbit

Soviet spacecraft Kosmos 482 reentered Earth’s atmosphere on Saturday after 53 years in orbit following a failed attempt to launch toward Venus.

A Soviet-era spacecraft, Kosmos 482, made an uncontrolled reentry into Earth’s atmosphere on Saturday, marking the end of its 53-year journey in orbit. The spacecraft was originally launched in 1972 as part of a series of missions aimed at exploring Venus, but it never escaped Earth’s gravitational pull due to a rocket malfunction.

The European Union Space Surveillance and Tracking confirmed the spacecraft’s reentry, noting that it had failed to appear on subsequent orbits, which indicated its descent. The European Space Agency’s space debris office also reported that Kosmos 482 had reentered after it was not detected by a radar station in Germany.

Details regarding the exact location and condition of the spacecraft upon reentry remain unclear. Experts had anticipated that some, if not all, of the half-ton spacecraft might survive the fiery descent, as it was designed to endure the harsh conditions of a landing on Venus, the hottest planet in our solar system.

Despite the potential for debris to reach the ground, scientists emphasized that the likelihood of anyone being harmed by falling spacecraft debris was exceedingly low. The spherical lander of Kosmos 482, measuring approximately 3 feet (1 meter) in diameter and encased in titanium, weighed over 1,000 pounds (495 kilograms).

After its launch, much of the spacecraft had already fallen back to Earth within a decade. However, the lander remained in orbit until its recent reentry, as it could no longer resist the pull of gravity due to its deteriorating orbit.

As the spacecraft spiraled downward, scientists and military experts were unable to predict precisely when or where it would land. The uncertainty was compounded by solar activity and the spacecraft’s condition after more than five decades in space.

As of Saturday morning, the U.S. Space Command had not yet confirmed the spacecraft’s demise, as it continued to collect and analyze data from orbit. The U.S. Space Command routinely monitors dozens of reentries each month, but Kosmos 482 garnered additional attention from both government and private space trackers due to its potential to survive reentry.

Unlike many other pieces of space debris, Kosmos 482 was coming in uncontrolled, without any intervention from flight controllers. Typically, such controllers aim to direct old satellites and debris toward vast expanses of water, such as the Pacific Ocean, to minimize risks to populated areas.

The reentry of Kosmos 482 serves as a reminder of the long-lasting impact of space missions from the Soviet era and the ongoing challenges of tracking and managing space debris. As space exploration continues to evolve, the legacy of these early missions remains a topic of interest for scientists and space enthusiasts alike.

According to Fox News, the reentry of Kosmos 482 highlights the complexities and risks associated with aging spacecraft and the importance of monitoring space debris in our increasingly crowded orbital environment.

Starbucks Appoints Indian-American Anand Varadarajan as Chief Technology Officer

Starbucks has appointed Anand Varadarajan, a veteran of Amazon, as its new chief technology officer, effective January 19, 2026.

Starbucks announced on Friday that it has appointed Anand Varadarajan as its new chief technology officer (CTO). Varadarajan, who spent nearly 19 years at Amazon, most recently led technology and supply chain operations for the tech giant’s worldwide grocery stores business.

In a memo announcing the hiring, Starbucks CEO Brian Niccol praised Varadarajan’s expertise, stating, “He knows how to create systems that are reliable and secure, drive operational excellence, and scale solutions that keep customers at the center. Just as important, he cares deeply about supporting and developing the people behind the scenes that build and enable the technology we use.”

Varadarajan will officially begin his role on January 19, 2026, and will also serve as executive vice president. He takes over from Deb Hall Lefevre, the former CTO, who departed in September amid a $1 billion restructuring plan that included a second round of layoffs.

With a strong educational background, Varadarajan is an alumnus of the Indian Institute of Technology (IIT) and holds a master’s degree in civil engineering from Purdue University, as well as a master’s degree in computer science from the University of Washington.

During his tenure at Amazon, Varadarajan was recently elevated to oversee the worldwide grocery technology and supply chain organizations, which encompass both the company’s Fresh brand and Whole Foods. He reported directly to Jason Buechel, Amazon’s grocery chief and the CEO of Whole Foods.

At Amazon, Varadarajan was instrumental in implementing grocery technology innovations, including a pilot program that introduced mini robotic warehouses in Whole Foods supermarkets. This initiative enabled consumers to shop from both the in-store selection and products from Amazon’s broader inventory, which are not typically available at the organic grocer.

Starbucks is currently navigating a significant turnaround strategy under Niccol, who took over as CEO in September 2024. The company recently reported that its quarterly same-store sales returned to growth for the first time in nearly two years, according to CNBC. Additionally, holiday sales have shown strong performance this season, despite ongoing strikes by baristas.

A key component of Starbucks’ turnaround strategy is its hospitality platform, Green Apron Service, which represents the company’s largest investment in labor at $500 million. This program is designed to ensure proper staffing and enhance technology to maintain fast service times. It was developed in response to the growth in digital orders, which now account for more than 30% of sales, as well as feedback from baristas.

In a related development, Starbucks recently announced it would pay $35 million to more than 15,000 workers in New York City to settle claims that it denied them stable schedules and arbitrarily reduced their hours. This settlement comes amid a continuing strike by Starbucks’ union, which began last month in various locations across the U.S. This marks the third strike to impact the chain since the union was established four years ago.

As Starbucks moves forward with its strategic initiatives, Varadarajan’s extensive experience in technology and supply chain management is expected to play a crucial role in the company’s efforts to enhance operational efficiency and customer satisfaction.

According to CNBC, the company is focused on leveraging technology to improve service and address the challenges posed by labor disputes.

Meta’s AI Hire Alexander Wang Faces Tensions with Mark Zuckerberg

Meta’s ambitious AI expansion faces internal challenges as tensions rise between CEO Mark Zuckerberg and newly appointed AI leader Alexander Wang.

Meta has embarked on a significant push into artificial intelligence, investing billions of dollars to expand its capabilities. However, recent reports suggest that the company’s AI division is experiencing friction between its leadership and CEO Mark Zuckerberg’s management style.

In a bid to enhance its AI efforts, Meta recruited young tech prodigy Alexander Wang to lead the company’s AI division. Despite the high expectations surrounding his appointment, it appears that Wang and Zuckerberg are struggling to find common ground. Reports indicate that Wang has expressed concerns to associates about Zuckerberg’s micromanagement approach, which he perceives as “suffocating.”

According to a report by the Financial Times, Wang has voiced his frustrations regarding Zuckerberg’s tight control over the AI initiative, claiming it is hindering progress. This internal discord highlights the challenges that can arise when a visionary leader’s ambitions clash with a more centralized management style.

Wang, an accomplished American tech entrepreneur, is best known for founding Scale AI, a company that provides annotated data essential for training machine-learning models. His early talent in mathematics and computing led him to briefly attend the Massachusetts Institute of Technology (MIT) before he dropped out in 2016 to focus on Scale AI full-time. Under his leadership, the startup quickly became a vital player in the AI ecosystem, collaborating with major tech firms such as Nvidia, Amazon, and Meta itself. By 2024, Scale AI had achieved a valuation nearing $14 billion, positioning Wang as one of the youngest self-made billionaires in the AI sector.

In June 2025, Zuckerberg made a bold strategic move by investing approximately $14.3 billion in Scale AI and bringing Wang on board to lead a new division dedicated to superintelligence. This decision was part of Meta’s efforts to revitalize its AI ambitions amid increasing competition from rivals like OpenAI and Google. Wang’s responsibilities include overseeing Meta’s entire AI operation, encompassing research, product development, and infrastructure teams within the superintelligence initiative.

However, Wang’s dissatisfaction is emblematic of broader internal challenges at Meta. The company has faced a series of layoffs, senior executive departures, and rushed AI rollouts, all of which have contributed to a decline in employee morale and heightened investor anxiety. Meta’s ambitious AI expansion underscores the company’s determination to remain competitive in a rapidly evolving tech landscape, yet it also reveals the complexities that accompany such aggressive growth.

The tension between Wang’s innovative vision and Zuckerberg’s management practices reflects a common theme in fast-moving tech companies: attracting top talent and investing substantial resources does not guarantee seamless execution or alignment at the leadership level. The friction between Wang and existing management highlights the difficulties of integrating high-profile hires into established corporate cultures, especially when rapid decision-making and centralized control conflict with the autonomy expected by AI innovators.

Beyond individual personalities, these developments point to systemic pressures within Meta. The combination of accelerated timelines, significant financial commitments, and intense public scrutiny creates an environment ripe for conflict, as reported by sources familiar with the situation. When organizational cohesion is strained, investor concerns, employee morale, and operational efficiency can all be adversely affected.

As Meta navigates these challenges, its ability to convert financial and technological investments into sustained innovation may hinge less on capital alone and more on fostering collaborative leadership, clear communication, and adaptable management structures. The outcome of this internal struggle could significantly impact Meta’s future in the competitive AI landscape.

According to Financial Times, the ongoing tensions between Wang and Zuckerberg could have lasting implications for Meta’s ambitious AI goals.

ChatGPT Mobile Spending Surpasses $3 Billion Worldwide

ChatGPT’s mobile app has surpassed $3 billion in global consumer spending, reflecting rapid adoption of AI technology and a strong subscription model since its launch in May 2023.

OpenAI’s ChatGPT mobile app has achieved a significant milestone, crossing $3 billion in global consumer spending. This figure highlights the rapid adoption of artificial intelligence and the effectiveness of subscription-driven growth.

As of this week, the ChatGPT mobile app has surpassed $3 billion in worldwide consumer spending on both iOS and Android platforms since its launch in May 2023. According to estimates from app intelligence provider Appfigures, a substantial portion of this growth—approximately $2.48 billion—occurred in 2025 alone. This marks a notable increase compared to the $487 million spent in 2024, showcasing the widespread acceptance of AI tools on mobile devices.

The ChatGPT app reached the $3 billion milestone in just 31 months, outpacing other major applications. For instance, TikTok took 58 months to reach a similar figure, while streaming services like Disney+ and HBO Max required 42 and 46 months, respectively. This rapid adoption underscores ChatGPT’s unique position in the mobile app market.

A significant portion of the spending is attributed to paid subscription tiers, such as ChatGPT Plus and ChatGPT Pro, which provide users with access to advanced features and the latest AI models. The app’s visibility in mobile app rankings has also increased, reflecting a growing consumer willingness to invest in AI-powered services. This achievement establishes ChatGPT as one of the most rapidly monetized AI applications in mobile history.

The $3 billion figure encompasses total spending on iOS and Android devices since the app’s initial launch. When it first debuted in May 2023, it was available exclusively on iOS.

ChatGPT is an AI language model developed by OpenAI that can comprehend and generate human-like text based on user prompts. It employs advanced machine learning techniques to perform a variety of tasks, including answering questions, writing content, translating languages, summarizing text, and assisting with coding.

The model has been integrated into various platforms, encompassing both web and mobile applications. It offers users free access alongside paid subscription options that provide enhanced capabilities. As a result, ChatGPT has rapidly emerged as one of the most widely utilized AI tools, reflecting the increasing demand for conversational AI across sectors such as education, business, entertainment, and everyday problem-solving.

The swift rise of the ChatGPT mobile app signifies a broader shift in consumer engagement with artificial intelligence, indicating a growing comfort with incorporating AI tools into daily life. Beyond impressive revenue figures, its success illustrates a larger trend toward mainstream adoption of AI-powered applications, where users increasingly recognize the value of conversational AI for productivity, creativity, and problem-solving.

This milestone also highlights the effectiveness of a subscription-based model for monetizing advanced AI services, demonstrating users’ willingness to invest in tools that enhance efficiency and provide innovative capabilities.

The app’s accelerated adoption compared to other major platforms reflects evolving expectations among mobile users and the distinct appeal of AI-driven experiences that deliver immediate, tangible benefits. Furthermore, this growth suggests a potential expansion of AI across various sectors, from education and entertainment to professional workflows, as accessibility and user familiarity continue to improve.

According to Appfigures, the success of ChatGPT’s mobile app is a testament to the increasing integration of AI into everyday life.

AAPI Global Health Summit 2026 Advances Medical Innovation, Global Partnerships, and Community Impact in Odisha

The American Association of Physicians of Indian Origin (AAPI) is proud to announce that the AAPI Global Health Summit (GHS) 2026 will be held from January 9–11, 2026, in Bhubaneswar, Odisha, in collaboration with the Kalinga Institute of Medical Sciences (KIMS), KIIT University, and leading healthcare institutions across the nation.

Bringing together hundreds of physicians, medical educators, researchers, and public health leaders from the United States and India, GHS 2026 will serve as a premier platform for advancing clinical excellence, strengthening global health partnerships, and expanding community‑focused initiatives across India.

AAPI President Dr. Amit Chakrabarty emphasized the significance of the upcoming summit, stating, “GHS 2026 will showcase the very best of Indo‑U.S. medical collaboration. Our goal is to share knowledge, build capacity, and create sustainable health solutions that benefit communities across India.”

GHS New Logo

A Transformative Three‑Day Summit

The 2026 Summit will feature a robust lineup of CME sessions, hands‑on workshops, global health panels, surgical demonstrations, community outreach programs, and youth engagement activities. Events will be hosted across KIMS, Mayfair Lagoon, and Swosti Premium, offering participants a dynamic and immersive learning environment.

Key Highlights Include:

✅ Scientific CME Sessions

Covering critical topics such as metabolic syndrome, hemoglobinopathies, cervical cancer, mental health, and healthcare advocacy.

✅ AI in Global Medical Practices Forum

A full‑day program dedicated to artificial intelligence in healthcare, featuring global experts discussing medical superintelligence, AI‑driven diagnostics, radiology innovation, and ethical considerations.

✅ Emergency Medicine & Resuscitation Workshops

Hands‑on training in AHA 2025 guidelines, NELS protocols, cardiac arrest management, and advanced simulation using SimMan3G Plus.

✅ Specialized Tracks

Including TB elimination strategies, diabetes and obesity management, Ayurveda CME, IMG professional development, and ER‑to‑ICU rapid‑response training.

✅ Women in Healthcare Leadership Forum

A dedicated platform highlighting the contributions and leadership pathways of women physicians in India and the U.S.

✅ Youth & Community Programs

Mass CPR training, HPV vaccination drives, stem cell donor registration, and child welfare initiatives.

Dr Rabi Samantanoted, “The Global Health Summit is not just a conference—it is a mission. GHS 2026 will empower clinicians with the tools, technology, and global perspectives needed to transform patient care.”

Dr Amit Chkrabarty

Strengthening Indo‑U.S. Healthcare Collaboration

For nearly two decades, AAPI’s Global Health Summits have played a pivotal role in advancing medical education, fostering research partnerships, and supporting public health initiatives across India.

Dr Sita Kanta Dash, while describing the GHS 2026 initiatives said, “GHS 2026 will continue this legacy with an expanded focus on the following:

  • Technology‑driven healthcare innovation
  • Capacity building for medical students and residents
  • Community‑centered preventive health programs
  • Collaborative research between U.S. and Indian institutions.”

AAPI Vice President Dr. Meher Medavaram highlighted the summit’s broader impact, saying, “Our work extends far beyond CMEs. GHS 2026 will strengthen communities, support youth, and build bridges between healthcare systems that share a common purpose.”

Leadership at the Helm

GHS 2026 is guided by a distinguished group of leaders from AAPI and partner institutions in India:

AAPI National Leadership

  • Dr. Amit Chakrabarty, President, AAPI & Chairman, GHS
  • Dr. Meher Medavaram, President‑Elect
  • Dr. Krishna Kumar, Vice President
  • Dr. Satheesh Kathula, Immediate Past President
  • Dr. Mukesh Lathia, Souvenir Chair
  • Dr. Tarak Vasavada, CME Chair
  • Dr. Kalpalatha Guntupalli, Women’s Forum Coordinator
  • Dr. Atasu Nayak, President, Odisha Physicians of America
  • Dr. Vemuri S. Murthy, CME Coordinator

Kalinga & KIMS Leadership (India)

  • Dr. Achyuta Samanta, Hon. Founder, KIIT, KISS & KIMS – Chief Patron
  • Dr. Sita Kantha Dash, Chairman, Kalinga Hospital Ltd
  • Dr. S. Santosh Kumar Dora, CEO, Kalinga Hospital Ltd
  • Dr. Rabi N. Samanta, Advisor to Hon’ble Founder, KIIT, KISS & KIMS
  • Dr. Ajit K. Mohanty, Director General, KIMS

AAPI Liaisons – India

  • Prof. Suchitra Dash, Principal & Dean, MKCG Medical College
  • Dr. Uma Mishra, Advisor
  • Dr. Bharati Mishra, Retd. Prof & HOD, ObGyn
  • Dr. Abhishek Kashyap, Founder, GAIMS
  • Er. Prafulla Kumar Nanda, Coordinator
  • Mrs. Nandita Bandyopadhyaya, Hospitality
  • Mr. Nishant Koli, Promotions
  • Mr. Dilip Panda, Promotions

AAPI Event Coordinators

  • Dr. Anjali Gulati
  • Mrs. Vijaya Mulpur
  • Mrs. Sonchita Chakrabarty
  • Dr. Tapti Panda

Dr. Chakrabarty praised the collaborative leadership, noting, “The strength of GHS lies in the collective expertise of our leaders across the U.S. and India. Their commitment ensures that this summit will deliver meaningful, lasting impact.”

AAPI’s Vision for 2026 and Beyond

As AAPI prepares to welcome delegates to Odisha, the organization reaffirms its commitment to improving healthcare delivery, expanding access to quality care, and nurturing the next generation of medical leaders.

Dr. Chakrabarty added, “GHS 2026 is an invitation—to learn, to collaborate, and to lead. Together, we will shape a healthier future for India and the world. We will ensure that GHS 2026 is one of the best events in the recent history of AAPI. We are collaborating with all possible channels of communication to ensure maximum participation from all the physicians of Odisha.  I assure you that this is going to be a grand project.” Please watch the Interview by Dr. Amit Chakrabarty on GHS 2026 at: https://youtu.be/wG6WZbyw-zE?si=Nz_l45qplMpYp5le

For more details, please visit: www.aapiusa.org

Data Breach Exposes Personal Information of 400,000 Bank Customers

A significant data breach involving fintech firm Marquis has compromised the personal information of over 400,000 bank customers, with Texas being the most affected state.

A major data breach linked to the U.S. fintech firm Marquis has exposed the sensitive information of more than 400,000 individuals across multiple states. The breach was facilitated by hackers who exploited an unpatched vulnerability in a SonicWall firewall, leading to unauthorized access to consumer data. Texas has been particularly hard hit, with over 354,000 residents affected, and this number may continue to rise as additional notifications are issued.

Marquis serves as a marketing and compliance provider for financial institutions, working with over 700 banks and credit unions nationwide. This role grants the company access to centralized pools of customer data, making it a prime target for cybercriminals.

According to legally mandated disclosures filed in Texas, Maine, Iowa, Massachusetts, and New Hampshire, the hackers accessed a wide array of personal and financial information. The stolen data includes customer names, dates of birth, postal addresses, Social Security numbers, and bank account, debit, and credit card numbers. The breach reportedly dates back to August 14, when the attackers gained access through the SonicWall vulnerability. Marquis later confirmed that the incident was a ransomware attack.

While Marquis has not publicly identified the attackers, the breach has been widely associated with the Akira ransomware gang, known for targeting organizations using SonicWall appliances during large-scale exploitation waves. This incident is not merely a routine credential leak; it poses significant risks to affected individuals.

In a statement to CyberGuy, a spokesperson for Marquis said, “In August, Marquis Marketing Services experienced a data security incident. Upon discovery, we immediately enacted our response protocols and proactively took the affected systems offline to protect our data and our customers’ information. We engaged leading third-party cybersecurity experts to conduct a comprehensive investigation and notified law enforcement.” The spokesperson emphasized that while unauthorized access occurred, there is currently no evidence suggesting that personal information has been used for identity theft or financial fraud.

Ricardo Amper, CEO and Founder of Incode Technologies, a digital identity verification company, highlighted the long-term dangers of identity breaches. Unlike a stolen password, core identity data such as Social Security numbers and birth dates cannot be changed, meaning the risk of misuse can persist for years. “With a typical credential leak, you reset passwords, rotate tokens and move on,” Amper explained. “But core identity data is static. Once exposed, it can circulate on criminal markets for years.” This makes identity breaches particularly hazardous, as criminals can reuse stolen data to open new accounts, create fake identities, or execute targeted scams.

The breach also raises concerns about account takeover and new account fraud. With sufficient personal details, attackers can bypass security checks, reset passwords, and change account information, often in ways that appear legitimate. Synthetic identity fraud is another growing threat, where real data is combined with fabricated details to create new identities that can later be exploited.

Ransomware groups like Akira are increasingly targeting widely deployed infrastructure to maximize their impact. When a firewall is compromised, everything behind it becomes vulnerable. “What we’re seeing with groups like Akira is a focus on maximizing impact by targeting widely used infrastructure,” Amper noted. This strategy exposes a significant blind spot in traditional cybersecurity practices, as many organizations still assume that traffic passing through a firewall is safe.

Identity data does not expire; Social Security numbers and birth dates remain constant throughout a person’s life. Amper emphasized that when such data reaches criminal markets, the associated risks do not diminish quickly. “Fraud rings treat stolen identity data like inventory. They hold it, bundle it, resell it, and combine it with information from new breaches,” he said.

Victims of identity breaches often experience a lasting erosion of trust. Amper pointed out that the psychological toll of knowing that one can no longer trust who is contacting them can be significant. “The most damaging fraud often starts long after the breach is no longer in the news,” he added.

In light of the Marquis breach, experts recommend several protective measures. A credit freeze can prevent criminals from opening new accounts in your name using stolen identity data. This is particularly crucial after a breach where full identity profiles have been exposed. A fraud alert can also be placed to instruct lenders to take extra steps to verify your identity before approving credit.

Additionally, turning on alerts for withdrawals, purchases, login attempts, and password changes across all financial accounts can help catch unauthorized activity early. Regularly checking statements and credit reports is essential, as identity data from breaches can be reused for delayed fraud.

Implementing strong two-factor authentication methods, such as app-based or hardware-backed options, can further enhance security. Biometric authentication tied to physical devices also adds a layer of protection against account takeovers driven by stolen identity data.

As data brokers continue to collect and resell personal information, utilizing a data removal service can help reduce the amount of personal information publicly available, thereby lowering exposure to potential fraud. While no service can guarantee complete removal of data from the internet, these services actively monitor and erase personal information from numerous websites.

In summary, the Marquis data breach underscores the critical need for robust cybersecurity measures, particularly in the financial sector. As the fallout from this incident continues, individuals must remain vigilant in protecting their identities and personal information.

For further information on protecting your identity after a major data breach, you can refer to CyberGuy.

Global Malayalee Festival to Launch Wayanad AI and Data Center Project

The inaugural Global Malayalee Festival in Kochi will unveil plans for the Wayanad AI and Data Center Park, aiming to position Kerala as a leader in technology and innovation.

Kochi: The inaugural Global Malayalee Festival, taking place on January 1 and 2 at the Crowne Plaza Hotel in Kochi, promises to be a landmark event for the global Malayalee community. This festival, organized by the Malayalee Festival Federation, a not-for-profit organization registered as an NGO, aims to blend cultural celebration with strategic economic initiatives.

Bringing together Malayalees from around the world, the festival seeks to foster cultural unity, business collaboration, and long-term development initiatives for Kerala. A key highlight of the event will be the announcement of a significant public-private partnership project—the proposed Wayanad AI and Data Center Park. This initiative aims to position Kerala as a leading hub for artificial intelligence, data infrastructure, and technological innovation in India.

The Global Malayalee Festival is designed to be inclusive, welcoming participants from all walks of life, including professionals, entrepreneurs, academics, artists, and community leaders. The central event on the evening of January 1 will feature global delegates networking and celebrating the New Year, underscoring the festival’s emphasis on unity and shared identity.

January 2 will be dedicated to the first-ever Global Malayalee Trade and Investment Meet, a full day of structured sessions aimed at connecting Kerala with global business expertise and capital. The morning session will include presentations from prominent business leaders, particularly from Gulf countries, alongside leading Malayalee entrepreneurs. Discussions will focus on investment opportunities in Kerala, emerging global markets, cross-border trade, and the diaspora’s role in strengthening the state’s economy.

The afternoon session will shift focus to artificial intelligence, information technology, and startup ecosystems, reflecting Kerala’s ambitions in the digital economy. Industry experts, technology entrepreneurs, and startup leaders are expected to explore opportunities in AI innovation, data science, and digital infrastructure, highlighting Kerala’s potential as a knowledge and technology hub.

During this session, the Malayalee Festival Federation will formally announce plans for the Wayanad AI and Data Center Park, proposed to be located in South Wayanad, between Kalpetta and Nilambur. This project is envisioned as a comprehensive facility that will combine AI research and development, innovation labs, training and skilling centers, and a modern data center.

“Kerala should be at the forefront of AI development in India,” organizers stated, adding that the proposed park aims to create high-value employment, promote innovation, and attract both domestic and international investment. The federation plans to collaborate with the Kerala state government, the central government, and venture capital partners over the coming year to bring this proposal to fruition.

The evening public session on January 2 will honor 16 distinguished individuals with the Global Malayalee Ratna Awards, recognizing excellence and lifetime contributions across various fields, including business, finance, engineering, science, technology, politics, literature, arts, culture, trade, and community service. Additionally, several other prominent Malayalees will receive special recognition for their personal achievements and sustained contributions to the global Malayalee community.

The festival is expected to attract attendance from Kerala and central ministers, opposition leaders, senior political figures, and special guests from abroad, particularly from the Gulf region, highlighting the growing global footprint of the Malayalee diaspora.

Abdullah Manjeri, Director and Managing Director of the Malayalee Festival Federation, emphasized that the organization’s core mission is the socio-economic development of Kerala by leveraging the expertise, experience, and resources of global Malayalees. “The Global Malayalee Festival is intended to build a lasting network of Malayalees across continents and actively connect them with Kerala’s development journey,” he said. Initiatives like the Wayanad AI and Data Center Park reflect the federation’s commitment to future-oriented growth.

The festival will conclude with a gala dinner and orchestra, merging cultural celebration with a renewed commitment to collaboration and innovation. With its unique blend of culture, commerce, technology, and recognition, the first Global Malayalee Festival is poised to become a recurring platform that not only celebrates Malayalee identity but also channels global expertise toward shaping Kerala’s future, according to Global Net News.

FBI Director Kash Patel Discusses AI Efforts Against Domestic and Global Threats

FBI Director Kash Patel announced the agency’s expansion of artificial intelligence tools to address evolving domestic and global threats in the digital age.

FBI Director Kash Patel revealed on Saturday that the agency is significantly increasing its use of artificial intelligence (AI) to combat both domestic and international threats. In a post on X, Patel emphasized that AI is a “key component” of the FBI’s strategy to stay ahead of “bad actors” in an ever-changing threat landscape.

“The FBI has been working on key technology advances to keep us ahead of the game and respond to an always changing threat environment both domestically and on the world stage,” Patel stated. He highlighted an ongoing AI project designed to assist investigators and analysts in the national security sector, aiming to outpace adversaries who seek to harm the United States.

To ensure that the agency’s technological tools evolve in line with its mission, Patel mentioned the establishment of a “technology working group” led by outgoing Deputy Director Dan Bongino. “These are investments that will pay dividends for America’s national security for decades to come,” he added.

A spokesperson for the FBI confirmed to Fox News Digital that there would be no additional comments beyond Patel’s post on X.

According to the FBI’s website, the agency employs AI in various applications, including vehicle recognition, voice-language identification, speech-to-text analysis, and video analytics. These tools are part of the FBI’s broader strategy to enhance its capabilities in addressing modern threats.

Earlier this week, Dan Bongino announced his resignation from the FBI, effective January. In his post on X, he expressed gratitude to President Donald Trump, Attorney General Pam Bondi, and Director Patel for the opportunity to serve. “Most importantly, I want to thank you, my fellow Americans, for the privilege to serve you. God bless America, and all those who defend Her,” Bongino wrote.

As the FBI continues to adapt to the challenges posed by evolving technology and threats, the integration of AI is expected to play a crucial role in its operations moving forward, according to Fox News.

Google Cloud Partners with Palo Alto Networks in Nearly $10 Billion Deal

Palo Alto Networks will migrate key internal workloads to Google Cloud as part of a nearly $10 billion deal, enhancing their strategic partnership and engineering collaboration.

Palo Alto Networks has announced a significant multibillion-dollar deal with Google Cloud, which will see the migration of key internal workloads to the cloud platform. This partnership, revealed on Friday, marks an expansion of their existing collaboration and aims to deepen their engineering efforts.

As part of this agreement, Palo Alto Networks will utilize Google Gemini’s artificial intelligence models for its copilots and leverage Google Cloud’s Vertex AI platform. This integration reflects a growing trend among enterprises to harness AI while addressing security concerns.

“Every board is asking how to harness AI’s power without exposing the business to new threats,” said BJ Jenkins, president of Palo Alto Networks. “This partnership answers that question.” Matt Renner, chief revenue officer for Google Cloud, echoed this sentiment, stating that “AI has spawned a tremendous amount of demand for security.”

Palo Alto Networks is well-known for its extensive range of cybersecurity products and has already established over 75 joint integrations with Google Cloud. The company has reported $2 billion in sales through the Google Cloud Marketplace, underscoring the success of their collaboration thus far.

The new phase of the partnership will enable Palo Alto Networks customers to protect live AI workloads and data on Google Cloud. It will also facilitate the maintenance of security policies, accelerate Google Cloud adoption, and simplify and unify security solutions across various platforms.

According to a recent press release from Palo Alto Networks, their State of Cloud Report, released in December 2025, indicates that customers are significantly increasing their use of cloud infrastructure to support new AI applications and services. Alarmingly, the report found that 99% of respondents experienced at least one attack on their AI infrastructure in the past year.

This partnership aims to address these pressing security challenges through an enhanced go-to-market strategy. It will focus on building security into every layer of hybrid multicloud infrastructure, every stage of application development, and every endpoint. This approach will allow businesses to innovate with advanced AI technologies while safeguarding their intellectual property and data in the cloud.

The companies plan to deliver end-to-end AI security, which includes a next-generation software firewall driven by AI, an AI-driven secure access service edge (SASE) platform, and a simplified and unified security experience for users.

Both Google and Palo Alto Networks have made substantial investments in security software as enterprises increasingly adopt AI solutions. Notably, Google is in the process of acquiring security firm Wiz for $32 billion, pending regulatory approval.

Palo Alto Networks has also been active in the AI space, launching AI-driven offerings in October and announcing plans to acquire software company Chronosphere for $3.35 billion last month. Renner emphasized that this new deal highlights Google Cloud’s advantageous positioning as AI reshapes the competitive landscape against major rivals like Amazon and Microsoft.

This partnership between Palo Alto Networks and Google Cloud is poised to redefine how organizations approach AI security, ensuring that as they innovate, they do so with robust protections in place.

According to The American Bazaar, the collaboration is a strategic move to enhance security measures in an increasingly AI-driven world.

Potential New Dwarf Planet Discovery Challenges Planet Nine Hypothesis

The potential discovery of a new dwarf planet, 2017OF201, may provide further evidence for the existence of the theoretical Planet Nine, challenging previous beliefs about the Kuiper Belt.

A team of scientists from the Institute for Advanced Study School of Natural Sciences in Princeton, New Jersey, has announced the potential discovery of a new dwarf planet, designated 2017OF201. This finding could lend support to the theory of a super-planet, often referred to as Planet Nine, located in the outer reaches of our solar system.

The object, classified as a trans-Neptune Object (TNO), was located beyond the icy and desolate region of the Kuiper Belt. TNOs are minor planets that orbit the Sun at distances greater than that of Neptune. While many TNOs exist, 2017OF201 stands out due to its considerable size and unique orbital characteristics.

Leading the research team, Sihao Cheng, along with colleagues Jiaxuan Li and Eritas Yang, utilized advanced computational methods to analyze the object’s trajectory. Cheng noted that the aphelion, or the farthest point in its orbit from the Sun, is over 1,600 times that of Earth’s orbit. In contrast, the perihelion, the closest point to the Sun, is approximately 44.5 times that of Earth’s orbit, resembling Pluto’s orbital path.

2017OF201 takes an estimated 25,000 years to complete one orbit around the Sun. Yang suggested that its unusual orbit may have resulted from close encounters with a giant planet, which could have ejected it to a wider orbit. Cheng further speculated that the object may have initially been ejected into the Oort Cloud, the most distant region of our solar system, before being drawn back into its current orbit.

This discovery has significant implications for our understanding of the outer solar system’s structure. In January 2016, astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech) presented research suggesting the existence of a planet approximately 1.5 times the size of Earth in the outer solar system. However, this so-called Planet Nine remains a theoretical construct, as neither Batygin nor Brown has directly observed the planet.

The theory posits that Planet Nine could be similar in size to Neptune and located far beyond Pluto, in the Kuiper Belt region where 2017OF201 was found. If it exists, it is theorized to have a mass up to ten times that of Earth and could be situated as much as 30 times farther from the Sun than Neptune. Estimates suggest that it would take between 10,000 and 20,000 Earth years to complete a single orbit around the Sun.

Previously, the area beyond the Kuiper Belt was thought to be largely empty, but the discovery of 2017OF201 suggests otherwise. Cheng emphasized that only about 1% of the object’s orbit is currently visible to astronomers. He remarked, “Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system.”

Nasa has indicated that if Planet Nine does exist, it could help explain the peculiar orbits of certain smaller objects in the distant Kuiper Belt. As it stands, the existence of Planet Nine remains largely theoretical, with its potential presence inferred from gravitational patterns observed in the outer solar system.

This recent discovery of 2017OF201 adds a new layer to the ongoing exploration of our solar system and the mysteries that lie beyond the known planets.

According to Fox News, the implications of this discovery could reshape our understanding of celestial bodies in the far reaches of our solar system.

In Conversation with Supportiyo CEO on AI as a Digital Workforce

Supportiyo, co-founded by Ashar Ahmad, is transforming the home service industry by providing small businesses with an AI-driven digital workforce to enhance operational efficiency and reduce missed calls.

In an exclusive interview, Ashar Ahmad, co-founder and CEO of Supportiyo, discusses how the startup is revolutionizing operations for small businesses through applied artificial intelligence (AI).

Supportiyo, co-founded by Ahmad, is an applied AI startup focused on creating a digital workforce specifically for home service businesses. Unlike most AI tools that cater to large enterprises or technical users, Supportiyo aims to bridge the gap for small businesses that seek effective outcomes rather than complex tools.

The platform functions as a vertical AI phone agent for home service businesses, addressing one of the industry’s significant revenue leaks: missed calls. Supportiyo answers calls instantly, comprehends trade-specific language, manages customer objections, and books jobs directly into company calendars. This solution emerged from the collaboration between Ahmad, an AI engineer, and Ahmad M.S., a trades business owner who experienced firsthand the operational challenges faced by small businesses.

In the interview, Ahmad elaborated on Supportiyo’s mission and core purpose. “Supportiyo is an applied AI company building a digital workforce for home service businesses,” he explained. “Today, most advanced AI and automation tools are built for enterprises, engineers, or power users. Small business owners don’t want tools, workflows, or configuration platforms. They want work to get done.”

Ahmad emphasized that Supportiyo’s purpose is to transform existing AI capabilities into autonomous AI workers that take ownership of essential business functions. “These aren’t tools that merely assist people; they’re systems designed to actively perform work inside a business,” he noted. By identifying core workflows in home service businesses, Supportiyo creates AI workers capable of managing responsibilities from start to finish, delivering real return on investment without requiring business owners to learn new software or alter their operations.

When asked about the inspiration behind Supportiyo, Ahmad shared that the company was born out of a specific problem: missed calls. “As a builder and AI engineer, I saw how much capability already existed and how poorly it translated into real outcomes for small businesses,” he said. “When Ahmad, who was running a home service business at the time, became our first customer, the problem became very concrete. His business was losing revenue simply because calls were missed while technicians were in the field.”

Ahmad pointed out that the home services sector is one of the most underserved markets when it comes to technology solutions. While industries such as hospitality, banking, and education have access to various tools, home services have lagged behind. “Supportiyo exists to close the gap between modern technology and practical execution,” he added.

Supportiyo’s unique approach to trades businesses sets it apart from generic call-handling solutions. “We combine deep technical capability with real domain expertise,” Ahmad explained. “Most platforms give businesses ingredients—tools, workflows, prompts, and integrations—that owners are expected to assemble themselves. We take a different approach.” Instead of providing a kitchen full of tools, Supportiyo offers prebuilt, industry-specific AI workers that understand trade language, objections, scheduling logic, and operational nuances.

Feedback from early adopters has been overwhelmingly positive, with users expressing relief and trust in the system. An HVAC business owner noted that handling calls while working in the field was a significant challenge. After implementing Supportiyo, every customer was attended to and scheduled promptly, allowing the owner to step in only when necessary. A local food business shared that language barriers had previously hindered customer interactions, but Supportiyo learned their full menu and preferences, enabling smooth conversations and allowing the team to focus on their core work.

Ahmad highlighted that Supportiyo now manages close to 80% of inbound calls for some service business owners, providing them with more time to concentrate on growth. “Owners often describe Supportiyo not as software, but as an extra worker they can rely on,” he said.

When discussing how the AI handles objections and nuanced customer queries, Ahmad explained that the AI operates with full business context rather than relying on scripts or hardcoded prompts. “Each AI worker understands the specific business it represents, including services, pricing logic, availability, and policies,” he stated. This capability allows the AI to respond based on real business rules and past outcomes, ensuring accountability and effective resolution of customer inquiries.

Building Supportiyo has not been without its challenges. Ahmad noted that educating potential customers about AI’s capabilities is crucial before selling the product. “We first have to explain what AI can realistically do, what it replaces, and what outcomes owners should expect,” he said. Trust has also been a significant hurdle, as the AI category has been marred by flashy products that fail in real operations. Supportiyo addresses this by focusing on reliability, narrow responsibilities, and maintaining tight feedback loops with customers.

Ahmad described a typical customer journey, which has evolved from a hands-on onboarding process to a more streamlined experience. “Today, onboarding is fast and simple. A customer creates an account, selects their industry, connects their website, and activates an AI worker. Within minutes, calls are being handled,” he explained. For those seeking guidance, assisted onboarding allows customers to go live in under ten minutes. “The core principle is that the AI adapts to the business. The business does not adapt to the AI,” he added.

Looking ahead, Ahmad envisions Supportiyo becoming the default AI workforce for home service businesses within the next five years. “Platforms like Jobber and ServiceTitan helped move the industry from paper to software. Supportiyo moves it from software to autonomous AI workers,” he said. The goal is not to replace people but to alleviate operational burdens, allowing humans to focus on judgment, relationships, and growth. “Home services are just the beginning. The mission stays the same as we expand: applied AI that takes responsibility for real work and delivers measurable impact,” he concluded.

According to The American Bazaar, Supportiyo is poised to make a significant impact on the home service industry by providing small businesses with the tools they need to thrive in an increasingly competitive landscape.

-+=