US Supreme Court Declines Review of AI-Generated Art Copyright Case

The U.S. Supreme Court has opted not to address the copyright eligibility of art created by artificial intelligence, leaving lower court decisions intact.

The U.S. Supreme Court declined on Monday to consider whether art generated by artificial intelligence (AI) can be copyrighted under U.S. law. This decision comes in response to a case involving Stephen Thaler, a computer scientist from Missouri, who was denied copyright protection for a piece of visual art created by his AI technology.

Thaler had approached the Supreme Court after lower courts upheld a ruling from the U.S. Copyright Office, which stated that works produced by AI are ineligible for copyright protection due to the absence of a human creator. Thaler, based in St. Charles, Missouri, applied for federal copyright registration in 2018 for his artwork titled “A Recent Entrance to Paradise.” The piece depicts train tracks leading into a portal, surrounded by vibrant green and purple plant imagery.

In 2022, Thaler’s application was rejected on the grounds that copyright law requires a human author for creative works. The Supreme Court’s refusal to hear the case means that this decision remains in effect.

The Trump administration had previously urged the Supreme Court not to take up Thaler’s appeal. The Copyright Office has also denied copyright requests from other artists seeking protection for images generated with the AI platform Midjourney. Those artists claimed copyright in images they created with AI assistance; Thaler, by contrast, argued that his AI system generated “A Recent Entrance to Paradise” entirely on its own.

A federal judge in Washington upheld the Copyright Office’s decision in Thaler’s case in 2023, emphasizing that human authorship is a fundamental requirement for copyright eligibility. This ruling was later affirmed by the U.S. Court of Appeals for the District of Columbia Circuit in 2025.

Thaler’s legal team expressed concern over the implications of the Copyright Office’s stance, stating, “Even if it later overturns the Copyright Office’s test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

The administration reiterated its position, noting that while the Copyright Act does not explicitly define the term “author,” various provisions indicate that it refers to a human rather than a machine.

This is not the first time the Supreme Court has declined to address issues surrounding AI and intellectual property. Thaler previously sought the Court’s intervention in a separate case regarding whether AI-generated inventions could qualify for U.S. patent protection. His patent applications were similarly rejected by the U.S. Patent and Trademark Office on grounds consistent with those applied to his copyright claims.

The Supreme Court’s decision not to engage with the complexities of AI-generated art and its copyright implications leaves significant questions unanswered, particularly as AI technology continues to evolve and permeate various creative fields.

As the debate over AI and intellectual property rights continues, the implications of these rulings may have lasting effects on artists, technologists, and the broader creative industry.

According to The American Bazaar, the Supreme Court’s decision underscores the ongoing challenges faced by creators and innovators in navigating the intersection of technology and copyright law.

Iranian Networks Experience Disruptions Amid Airstrikes, Highlighting Digital Conflict Evolution

A recent cyberattack during airstrikes on Iran underscores the increasing importance of digital warfare in modern conflicts, revealing vulnerabilities in global networks and offering critical cybersecurity lessons.

A significant cyberattack coincided with airstrikes on Iran, illustrating the evolving nature of warfare where digital conflicts play a crucial role. On February 28, 2026, during Operation Roar of the Lion, fighter jets and cruise missiles targeted Iranian Revolutionary Guard command centers. Simultaneously, a parallel cyber offensive reportedly unfolded, resulting in widespread disruptions across the nation.

As missiles rained down, Iran experienced a near-total digital blackout. Key media platforms and official news sites went offline, while government digital services and local applications failed in major cities. According to NetBlocks, a global internet monitoring organization, internet traffic in Iran plummeted to just 4 percent of normal levels, indicating either a state-ordered shutdown or a large-scale cyberattack aimed at crippling critical infrastructure.

Western intelligence sources later suggested that the cyber offensive was designed to disrupt the command and control systems of the Islamic Revolutionary Guard Corps (IRGC) and hinder their ability to coordinate counterattacks. This incident serves as a stark reminder that modern warfare increasingly intertwines airstrikes with digital assaults, creating repercussions that extend far beyond the battlefield.

Reports indicated widespread outages throughout Iran, with major news outlets such as the state-run IRNA going offline. Tasnim, a semi-official news agency aligned with the IRGC, even displayed subversive messages targeting Supreme Leader Ali Khamenei. The IRGC, which plays a pivotal role in Iran’s national security and regional operations, faced significant operational challenges as local apps and government services failed in cities like Tehran, Isfahan, and Shiraz.

This was not merely a case of a single website being defaced; the attack appeared systemic. Electronic warfare reportedly disrupted navigation and communication systems, while distributed denial of service (DDoS) attacks overwhelmed networks with excessive traffic, rendering them inoperable. Deep intrusions targeted critical sectors such as energy and aviation, further exacerbating the crisis. Even Iran’s isolated national internet struggled under the pressure.

For a regime that tightly controls information, losing digital command poses both operational and political risks. Cyber operations can achieve objectives without the immediate loss of life, allowing for disruption without triggering full-scale war—a vital consideration in a region where escalation can occur rapidly. Historically, Iran has demonstrated an understanding of this strategy, having previously targeted U.S. financial institutions and Saudi Aramco in cyberattacks between 2012 and 2014.

Following Israeli strikes in 2025, cyberattacks targeting Israel surged dramatically within days. Cyber retaliation provides leaders with a means to respond while minimizing direct military confrontation, thereby gaining leverage in negotiations without crossing critical thresholds.

However, there is a significant risk involved. Each cyber strike carries the potential for miscalculation, and damage to critical infrastructure can quickly escalate into real-world consequences. If the recent blackout and airstrikes mark a turning point, Tehran has several options, none of which are straightforward. Cyber retaliation remains one of Iran’s most adaptable tools, ranging from disruptive attacks to influence campaigns that pressure critical services.

Experts warn that U.S. cyber defenses and the private sector may face sustained challenges in the wake of these events. Iran has previously used drones and electronic interference as signaling tools, and analysts note the potential for jamming, spoofing, and harassment of unmanned systems to raise costs without directly targeting personnel.

The risks are escalating. An official from an EU naval mission reported that IRGC radio transmissions warned ships against passage through the Strait of Hormuz. Greece has advised vessels to avoid high-risk routes, citing concerns about electronic interference that could disrupt navigation. Insurers are already adjusting their policies, with reports of war-risk coverage being canceled or significantly increased.

Iran has historically collaborated with allied forces and militias in the region, and some of these groups may escalate attacks on U.S. interests or allied partners in retaliation, further widening the conflict without direct state-to-state engagement. While missile strikes remain a high-impact option, they also increase the likelihood of rapid escalation. Recent analyses suggest that Iran may use missile strikes as a signaling tool, particularly if its leadership feels cornered.

The uncomfortable reality is that neither Washington nor Tehran likely desires a full-scale regional war. In such moments, military strikes rarely occur in isolation; they are often accompanied by diplomatic efforts. Leaders send signals, apply pressure, and attempt to leave room for negotiations. However, escalation can gain momentum quickly. Each missile fired alters the equation, and each casualty raises the stakes, making it increasingly difficult to de-escalate.

Fear and pride play significant roles in these dynamics, as domestic audiences demand displays of strength. This pressure can lead to limited strikes spiraling into larger conflicts. The recent events highlight a broader trend: nation-states are increasingly pairing kinetic strikes with digital offensives. Cyberattacks can blind communications, freeze infrastructure, and disrupt financial systems long before the first explosion is registered.

This reality is crucial for businesses and individuals alike. Modern conflicts do not remain confined to battlefields; supply chains, energy grids, and online platforms can all feel the ripple effects. The blackout in Iran serves as a reminder that digital resilience has become a national security issue. When a country’s internet can drop to just 4 percent of normal traffic within hours, it underscores the rapid escalation potential of cyber conflicts. Even disruptions occurring overseas can have far-reaching consequences for interconnected global networks.

While geopolitics may be beyond individual control, personal digital hygiene can be managed. Practical steps to reduce risk during heightened cyber activity include installing strong antivirus software, keeping devices updated, using unique passwords stored in reputable password managers, enabling two-factor authentication, and being cautious with urgent headlines or alerts about international conflicts.

The reported cyber blackout in Iran may signal a new chapter in modern conflict. While jets and missiles remain significant, the importance of servers, satellites, and code cannot be overlooked. Leaders may attempt to contain damage while demonstrating strength, but history shows how quickly plans can unravel under pressure. Today, warfare operates on electricity and bandwidth as much as it does on fuel and ammunition. When networks go dark, the repercussions extend far beyond the battlefield, affecting banking systems, airports, hospitals, and personal devices.

This moment serves as a crucial reminder: if an entire nation’s digital systems can be disrupted in hours, how prepared is your community for a similar event? The implications of these developments are profound and warrant careful consideration.


Google Discontinues Dark Web Monitoring Service: What You Need to Know

Google has discontinued its Dark Web Report feature, which previously scanned for personal information breaches, leaving users to rely on alternative security tools for monitoring their data exposure.

Google has officially discontinued its Dark Web Report feature, a free service that once scanned known dark web breach dumps for personal information associated with users’ Google accounts. This tool provided notifications when email addresses and other identifiers appeared in leaked datasets.

According to Google’s support page, the dark web scanning ceased on January 15, 2026, with the reporting function removed entirely on February 16, 2026. As a result, users can no longer access this feature. The company stated that this decision reflects a shift toward security tools that offer clearer guidance after exposure, rather than standalone scan alerts.

For those who previously relied on the dark web scan as an early warning system for leaked data, this change removes a significant source of information. The Dark Web Report functioned as a basic exposure scanner, checking whether personal information linked to a Google account had surfaced in known breach collections circulating on the dark web.

When a match was found, users received a notification detailing the type of data that appeared in a leak. This could include an email address, phone number, date of birth, or other identifying details commonly harvested during large-scale hacks. However, the report did not display stolen credentials or provide access to the leaked database itself, nor did it trace the origin of the compromise beyond referencing the breached service when available.

After receiving an alert, users were responsible for taking the next steps. Google recommended actions such as changing passwords, enabling stronger authentication methods, and reviewing account security settings. With the removal of the tool, the automated breach check tied directly to a Google account is no longer available.

Google now directs users to its Security Checkup, a dashboard that scans accounts for weak settings and unusual sign-in activity. Additionally, its built-in Password Manager includes a Password Checkup feature that scans saved credentials against known breach databases and prompts users to change exposed passwords. Google also supports passkeys and two-factor verification to enhance account security.
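Checking saved credentials against breach databases without exposing the passwords themselves typically relies on a privacy-preserving lookup. The simplest widely used variant is the k-anonymity range scheme popularized by Have I Been Pwned: the client hashes the password, sends only the first five hex characters of the hash, and compares the returned suffixes locally, so the full password never leaves the device. The sketch below follows the HIBP range-API response format (`SUFFIX:COUNT` lines); it is illustrative only and is not Google's Password Checkup implementation, which uses a more elaborate private-set-intersection protocol.

```python
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the SHA-1 hash of a password into the 5-char prefix
    sent to the range API and the suffix kept for local comparison."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range(suffix: str, range_body: str) -> int:
    """range_body is the API response for one prefix: lines of 'SUFFIX:COUNT'.
    Return how many times this password appeared in known breaches."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix and count.strip().isdigit():
            return int(count)
    return 0
```

Because the server only ever sees a five-character prefix shared by hundreds of leaked hashes, it cannot tell which password was actually being checked.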

The Results About You tool allows users to search for personal information in Google Search and submit removal requests for certain publicly indexed details. However, once personal information is compromised, it often ends up far beyond the initial breach. Stolen credentials and identity data are regularly trafficked on underground platforms where buyers can search for information tied to real individuals.

The BidenCash dark web marketplace was taken down by U.S. authorities in June 2025, with the Justice Department confirming that the platform sold stolen personal information and credit card data. These illicit markets operate with a level of organization comparable to legitimate online stores, offering search tools and bulk data sets that can be used to target online accounts. This makes credential stuffing easier, as attackers test leaked passwords across multiple services to gain unauthorized access.

A breach alert tied to a dark web scan indicates a leak at a specific moment in time; it does not track whether that information has been sold to third parties or used in subsequent fraud attempts. For everyday users, this means that simply knowing their data appeared in a leak does not provide much actionable insight.

With Google’s dark web scan now discontinued, some individuals may consider dedicated identity protection services. Many of these services offer continuous monitoring of personally identifiable information and send alerts about changes to credit reports from all three major U.S. credit bureaus. This can include notifications about new inquiries, newly opened accounts, and monthly credit score updates.

Beyond credit monitoring, certain services track linked bank, credit card, and investment accounts for unusual activity. They may also monitor public records for changes to addresses or property titles and alert users if their information appears in those filings. Many providers include identity theft insurance to help cover eligible out-of-pocket recovery costs, with coverage limits varying by plan and provider.

While no service can prevent every form of identity theft, ongoing monitoring and recovery support can facilitate a quicker response if personal information is misused. Google’s decision to drop its Dark Web Report may seem minor, but it eliminates a tool that many users relied on for early warnings about data breaches. Although Google continues to offer Security Checkup, Password Checkup, passkeys, and two-step verification, none of these actively scan dark web breach dumps for users.

Stolen data does not simply vanish; criminals copy, sell, and reuse it. An alert may indicate a single moment of exposure, but ongoing identity theft monitoring is essential for maintaining awareness over time. With the removal of Google’s dark web monitoring feature, users must now decide whether to actively check their data exposure or assume that someone else is monitoring it for them.

For more insights on identity protection and security, visit CyberGuy.com.

Ex-Twitter CEO’s Firm Block Plans to Cut Workforce by Nearly 50%, Citing AI

Jack Dorsey’s company Block plans to lay off 4,000 employees, nearly half of its workforce, citing increased productivity from artificial intelligence tools.

Block, the financial technology company founded by former Twitter CEO Jack Dorsey, has announced plans to lay off 4,000 of its 10,000 employees. This decision is attributed to advancements in artificial intelligence (AI) that have significantly enhanced productivity within the company.

In a letter to shareholders on Thursday, Dorsey emphasized the transformative impact of AI on business operations. “Intelligence tools have changed what it means to build and run a company,” he stated. “We’re already seeing it internally. A significantly smaller team, using the tools we’re building, can do more and do it better. And intelligence tool capabilities are compounding faster every week.”

Despite the substantial layoffs, Dorsey assured stakeholders that the decision was not a reflection of financial instability. He pointed out that Block had performed well, exceeding Wall Street expectations with a reported total revenue of $6.25 billion for the fourth quarter. In a post on X, he explained that he faced two options: to gradually reduce the workforce over an extended period or to act decisively in the present.

“Repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead,” Dorsey wrote.

During the earnings call, executives noted that Block had been increasingly integrating AI into its operations for several years. They indicated that some AI initiatives were nearing full implementation, while others were still in earlier stages of development. This announcement follows a previous round of layoffs earlier in February, which had already seen hundreds of workers let go.

The decision to reduce the workforce by nearly half has drawn comparisons to the drastic measures taken by Elon Musk when he acquired Twitter (now X) in November 2022, where he cut approximately 50% of the staff in a single move. Dorsey, a co-founder of Twitter, has had a complex relationship with Musk, initially supporting his acquisition but later suggesting that Musk “should have walked away.”

In addition to his role at Block, Dorsey has been involved in the development of Bluesky, a decentralized alternative to Twitter, and has expressed strong support for Bitcoin.

The layoffs at Block have reignited discussions about the broader implications of AI on employment. Tech leaders, including Anthropic CEO Dario Amodei and Meta CEO Mark Zuckerberg, have raised concerns about the potential negative effects of AI on the workforce. A recent report from the research firm Citrini, released on February 22, outlined a scenario where the growth of AI could adversely affect the overall economy.

Conversely, some industry figures have cautioned against hastily attributing layoffs to AI. OpenAI CEO Sam Altman has pointed out that some companies may be “AI washing,” or misleadingly linking unrelated layoffs to advancements in AI technology.

Critics on X have challenged Dorsey’s narrative regarding the layoffs at Block. One user highlighted that the company’s workforce had more than tripled from 3,900 to 12,500 employees between December 2019 and December 2022, during the tech boom fueled by the pandemic. “Unwinding less than half an insane COVID overhiring binge has much more to do with Jack Dorsey’s managerial incompetence than whether AI is going to take your job,” the post read.

Another commenter suggested that Block had created “two parallel company structures during COVID” and was now consolidating them, framing the layoffs as a management correction rather than a revolutionary shift driven by AI. This user predicted that more companies might use “AI restructuring” as a pretext for decisions that were already in the works.

The developments at Block reflect ongoing tensions in the tech industry regarding the role of AI in shaping the future of work and the management strategies employed by companies navigating these changes. As the conversation continues, the implications for employees and the economy remain a focal point of concern.

According to The American Bazaar, the situation at Block serves as a critical case study in the evolving landscape of technology and employment.

Amazon Discontinues Development of Blue Jay Warehouse Robot

Amazon has discontinued its Blue Jay warehouse robot program, raising questions about the scalability of advanced robotics in logistics.

Amazon has quietly ended its Blue Jay warehouse robot program just months after its initial unveiling, which aimed to enhance same-day delivery capabilities. The multi-armed, ceiling-mounted robot was introduced in October as a significant advancement in warehouse automation.

Despite the initial excitement surrounding Blue Jay, the program faced considerable challenges that ultimately led to its discontinuation. While the core technology behind Blue Jay will be integrated into other projects, the robot itself will no longer be developed.

This abrupt decision raises a critical question: If Amazon, one of the world’s leading logistics companies, cannot successfully implement a high-profile robot at scale, what does that imply for the future of artificial intelligence (AI) in practical applications?

Blue Jay was not merely an upgrade to existing conveyor belt systems; it was designed to recognize and sort multiple packages simultaneously using advanced AI-powered perception models. Amazon claimed that the system was developed in under a year, a remarkable feat aimed at increasing package throughput while alleviating worker strain in fulfillment centers.

However, despite its promising design, Blue Jay encountered significant engineering and cost hurdles. The robot’s ceiling-mounted configuration required intricate installation and seamless integration into Amazon’s Local Vending Machine warehouses, which are designed as expansive, automated structures. This rigidity in design likely became a liability, as modifications would necessitate extensive reconfiguration of hardware and infrastructure, a process that is both time-consuming and costly.

As a result, several employees who were involved in the Blue Jay project have transitioned to other robotics initiatives within the company. Although the Blue Jay robot itself has been shelved, Amazon continues to explore new avenues for improving its warehouse systems, with the underlying technology informing future designs.

Looking ahead, Amazon is shifting its focus to a new warehouse architecture known as Orbital. Unlike the older Local Vending Machine model, Orbital is modular, allowing for quicker deployment in various layouts. This adaptability is crucial as retail landscapes evolve, with customers increasingly expecting same-day delivery from urban centers, local stores, and grocery outlets.

Orbital could enable Amazon to establish micro-fulfillment centers in proximity to retail locations, including Whole Foods, thereby enhancing its competitive edge against rivals like Walmart, which already boasts a robust grocery network.

In conjunction with Orbital, Amazon is also developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling-mounted design, Flex Cell will operate on the floor, indicating a strategic shift towards smaller, more flexible automation solutions tailored to the unpredictable nature of local retail environments.

For regular Amazon customers, the immediate impact of these changes may be minimal, as same-day and next-day delivery options remain a priority. However, the long-term implications of Amazon’s evolving robotics strategy could significantly influence order fulfillment speed, pricing, and the operational dynamics of local warehouses.

If Orbital proves successful, it could facilitate faster and more efficient deliveries. Conversely, if it encounters difficulties, the expansion of same-day delivery services could slow down or become more costly. This scenario underscores a broader truth about AI: while software can adapt rapidly through code updates, physical robots face challenges that require substantial investment and time to overcome.

The discontinuation of Blue Jay highlights a growing divide in the tech industry. While software-based AI is advancing at a remarkable pace, hardware development remains fraught with complexities. Robots must navigate real-world challenges such as gravity, friction, and unpredictable human interactions, where each error carries tangible costs.

Amazon’s decision to shelve Blue Jay does not signify a retreat from robotics; rather, it represents a recalibration of its approach. The company is betting on the success of modular, flexible systems over large, integrated machines. This strategic pivot could shape the future of e-commerce logistics.

Ultimately, the promise of faster delivery, improved availability, and enhanced local convenience remains intact for consumers. However, the journey to realize these ambitions involves navigating the intricate balance between AI aspirations and the constraints of physical reality.

As Amazon grapples with the challenges of implementing advanced robotics at scale, it raises an important question: How much of the AI revolution is still more vision than reality? This ongoing dialogue will shape the future of technology and logistics in the years to come, according to CyberGuy.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a face-mounted electronic tattoo, or “e-tattoo,” designed to monitor mental workload in high-stress professions using advanced brainwave technology.

In an innovative study published in the journal Device, scientists have introduced a groundbreaking electronic tattoo device, referred to as an “e-tattoo,” that can help individuals in high-pressure work environments monitor their brain activity and cognitive performance.

The research team, led by Dr. Nanshu Lu from the University of Texas at Austin, emphasizes that mental workload is a crucial element in human-in-the-loop systems, significantly affecting cognitive performance and decision-making processes. This device aims to provide a more cost-effective and user-friendly method for tracking mental workload, particularly in demanding fields such as aviation, healthcare, and emergency response.

Dr. Lu noted that the e-tattoo could be particularly beneficial for professionals like pilots, air traffic controllers, doctors, and emergency dispatchers, who often operate under intense stress. Additionally, the technology could enhance training and performance for emergency room doctors and operators of robots and drones.

The primary objective of the study was to develop a means of measuring cognitive fatigue among individuals in high-stakes careers. The e-tattoo is designed to be temporarily affixed to the forehead and is significantly smaller than existing monitoring devices.

Utilizing electroencephalogram (EEG) and electrooculogram (EOG) technologies, the e-tattoo measures both brain waves and eye movements. Traditional EEG and EOG equipment tends to be bulky and expensive, but the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained that the device is designed to be as thin and flexible as a temporary tattoo sticker, allowing for comfortable wear while providing accurate readings. She stated, “Human mental workload is a crucial factor in the fields of human-machine interaction and ergonomics due to its direct impact on human cognitive performance.”

The study involved six participants who were tasked with identifying letters displayed on a screen. Each letter appeared sequentially at various locations, and participants were instructed to click a mouse when they recognized either the letter or its position from a previously shown set. The difficulty of the tasks increased progressively, and the researchers observed shifts in brainwave activity that indicated a heightened mental workload as challenges intensified.
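The letter-and-position task described resembles a dual n-back paradigm, in which a participant must flag when the current stimulus matches the one shown n steps earlier. The study does not publish its task code, so the sketch below is a hypothetical reconstruction: the letter set, the 3x3 grid, and the n-back matching rule are assumptions chosen for illustration, not details from the paper.

```python
import random

LETTERS = "BCDFGH"
GRID = [(r, c) for r in range(3) for c in range(3)]  # assumed 3x3 layout

def make_trials(n_trials: int, n_back: int, seed: int = 0):
    """Generate a sequence of (letter, position) stimuli and, for each
    trial, whether the letter and/or the position repeats the stimulus
    shown n_back trials earlier (the 'click' targets)."""
    rng = random.Random(seed)
    stimuli = [(rng.choice(LETTERS), rng.choice(GRID)) for _ in range(n_trials)]
    targets = []
    for i, (letter, pos) in enumerate(stimuli):
        if i < n_back:
            targets.append((False, False))  # nothing to compare against yet
        else:
            prev_letter, prev_pos = stimuli[i - n_back]
            targets.append((letter == prev_letter, pos == prev_pos))
    return stimuli, targets
```

Raising `n_back` is the standard way such tasks increase difficulty, which matches the article's description of progressively harder trials driving measurable shifts in brainwave activity.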

The e-tattoo consists of a battery pack, reusable chips, and a disposable sensor, making it a practical solution for real-time monitoring. Currently, the device is a lab prototype, with a production cost of approximately $200.

Dr. Lu highlighted that further development is necessary before the e-tattoo can be commercialized. This includes enhancing the device’s ability to decode mental workload in real-time and validating its effectiveness with a larger group of participants in more realistic settings.

As the demand for effective stress management tools in high-pressure jobs continues to grow, the e-tattoo represents a promising advancement in cognitive performance monitoring, potentially transforming how professionals manage their mental workload.

According to Fox News, the e-tattoo could pave the way for improved performance and training in various high-stakes occupations.

Sheel Dodani Receives $100,000 Hackerman Award for Protein Research

Indian American scientist Sheel Dodani has been awarded the prestigious $100,000 Hackerman Award for her innovative research in protein technology aimed at enhancing human health and environmental sustainability.

Sheel Dodani, an Indian American scientist, has received the esteemed 2026 Norman Hackerman Award in Chemical Research from The Welch Foundation. This award, which includes a $100,000 prize and a bronze sculpture, recognizes her groundbreaking work in the field of engineered proteins, specifically their application as anion sensors in biological systems.

Dr. Dodani is an associate professor of chemistry and biochemistry at the University of Texas at Dallas. Fred Brazelton, chair and director of The Welch Foundation, praised her achievements, stating, “Dr. Dodani is using creative and daring chemistry to engineer technologies that can measure and manipulate anions in living systems for the betterment of human health and the environment.”

The Hackerman Award is named after the foundation’s former scientific advisory board chair and aims to honor the accomplishments of early-career chemical scientists in Texas who are committed to advancing the fundamental understanding of chemistry. The award not only highlights individual achievement but also underscores the importance of innovative research in the scientific community.

Dr. David Hyndman, dean of the School of Natural Sciences and Mathematics at UT Dallas, remarked on the significance of Dodani’s work, stating, “Sheel Dodani’s research is opening an important new window into the chemistry of life.”

Dodani’s research group has developed the first coherent suite of genetically engineered fluorescent proteins that serve as biosensors for inorganic anions. While much attention has been given to cations—positively charged ions that are crucial for biological processes—anions, or negatively charged ions, have not been as thoroughly explored. This gap in understanding is particularly notable given the vital role that anions play in various biological functions.

One prominent example of an anion is chloride, which is essential for regulating fluid balance, blood pressure, and pH levels in the human body. The biosensors developed by Dodani have revolutionized researchers’ ability to track and visualize the behavior and interactions of these biologically significant anions in real time within living systems.

By utilizing fluorescent biosensors, researchers can now observe how anions behave in cells, paving the way for new therapeutic avenues. This includes the potential identification of small molecules that could treat chloride channel dysfunctions associated with diseases such as cystic fibrosis.

Reflecting on her research journey, Dodani noted, “This work began with a fundamental question: How can we bind an anion in water?” She explained that her team turned to nature’s supramolecular machines—proteins—to find answers. Through protein engineering, they have unlocked new functionalities in fluorescent proteins that enable the observation of anion biology, which has traditionally been challenging to study directly in living cells.

Dodani expressed gratitude for the support from The Welch Foundation, stating, “The Welch Foundation gave us the opportunity to pursue this direction early on. At the time, there was no established framework for investigating anions in water, let alone in living systems. By integrating concepts from different disciplines, we have started to answer questions that were previously out of reach.”

The Welch Foundation plays a crucial role in providing resources that allow researchers like Dodani to take risks in their scientific inquiries. This support is vital for those who aim to tackle complex questions that could have significant implications for human health and the environment.

Born and raised in Plano, Texas, Dodani completed her Bachelor of Science in chemistry at UT Dallas. She then pursued her PhD at the University of California, Berkeley, followed by a postdoctoral fellowship at the California Institute of Technology. In 2016, she returned to UT Dallas as a faculty member in the School of Natural Sciences and Mathematics, where she continues to make impactful contributions to the field of chemistry.

According to The American Bazaar, Dodani’s innovative research not only enhances our understanding of anions but also holds promise for future advancements in medical and environmental applications.

Arvind KC Appointed to Lead Global Expansion Efforts at OpenAI

OpenAI has appointed Arvind KC, a former Google executive, as Chief People Officer to enhance talent acquisition and workplace culture amid the company’s rapid expansion.

OpenAI has announced the appointment of Arvind KC as its new Chief People Officer, marking a significant addition to the leadership team of one of the world’s most scrutinized artificial intelligence companies.

KC, who previously held executive roles at Google and Roblox, will oversee human resources and internal scaling efforts at OpenAI during a period of rapid growth in both headcount and global influence.

With a strong foundation in both technical and managerial disciplines, KC brings a unique perspective to the role. He earned a bachelor’s degree in chemical engineering from the University Institute of Chemical Technology (UICT) in Mumbai, India, a prestigious institution known for its rigorous engineering programs.

Following his education in India, KC moved to the United States to pursue an MBA with a focus on operations management from Santa Clara University. This combination of technical knowledge and strategic management has positioned him well for leadership roles in high-growth technology environments.

Throughout his career, KC has navigated the complexities of rapidly scaling organizations. Most recently, he served as Chief People and Systems Officer at Roblox, where he aligned workforce strategy with internal technical systems to support the company’s growth.

Before his tenure at Roblox, KC was a Vice President at Google, where he led global engineering teams. His experience in engineering-heavy roles at companies like Palantir and Facebook (now Meta) allows him to effectively communicate with the researchers and developers he will now manage.

In his new position at OpenAI, KC is tasked with humanizing the company’s rapid expansion, which is often viewed through the lens of its algorithms. His responsibilities will include overseeing global talent acquisition, employee development, and fostering a workplace culture that can withstand the scrutiny faced by the AI sector.

“Arvind’s experience leading global teams at some of the world’s most innovative companies will be invaluable as we continue to grow,” OpenAI stated, highlighting his proven track record in managing large-scale organizational transitions.

This appointment signals a maturation phase for the San Francisco-based firm as it transitions from a small research lab to a global commercial powerhouse. The emphasis on the “human” element of operations reflects a strategic priority for OpenAI as it seeks to attract and retain top talent in a competitive labor market.

KC is expected to bridge the gap between ambitious technical objectives and the everyday needs of a world-class workforce, ensuring that OpenAI remains an attractive destination for elite professionals.

According to The American Bazaar, this leadership change underscores OpenAI’s commitment to developing a robust organizational culture as it continues to expand its reach in the AI industry.

Apple Warns Users of Scam Emails Targeting App Passwords

A recent phishing scam impersonating Apple warns users of a fraudulent $2,990 PayPal charge, urging them to call a fake support number, prompting cybersecurity experts to issue warnings.

A new phishing scam targeting Apple users has emerged, featuring a deceptive email claiming that an app-specific password was generated for the recipient’s account. The message falsely states that the user authorized a $2,990.02 charge through PayPal and includes a confirmation number, urging the recipient to call a support number immediately. Every element of the message follows the classic phishing playbook.

The email is designed to instill panic and urgency in recipients. It appears to be professionally crafted, using Apple branding and mentioning Apple Support. However, upon closer inspection, several red flags indicate that the message is not legitimate.

One of the most significant warning signs is the “To” field, which displays an email address that does not match the recipient’s actual Apple ID. Legitimate emails from Apple are sent directly to the email address associated with the user’s Apple ID. If the visible recipient address differs from yours, it is likely a mass-mailed or spoofed message, a common tactic used by scammers.

Scammers often use large sums of money, like the nearly $3,000 charge mentioned in this email, to provoke fear and prompt quick action from recipients. The goal is to create a sense of urgency that leads individuals to act without thinking critically about the situation.

The email also instructs recipients to call a specific phone number, which does not belong to Apple. Authentic Apple security communications typically direct users to log into their accounts directly rather than pressuring them to call an unfamiliar support line. If a recipient calls this number, they may be connected to a scammer who could extract personal information or financial details.

Additionally, the email contains links that appear to lead to official Apple resources, such as “Apple Account” and “Apple Support.” However, these links may be disguised, leading to malicious websites instead. It is crucial to avoid clicking on links in suspicious emails and instead navigate to official websites by typing the URL directly into a browser.

Another red flag is the mismatch between the email’s subject and its content. While the subject mentions an app-specific password, the body of the email suddenly shifts to discussing a PayPal transaction. This inconsistency is a common tactic used by scammers to heighten urgency and confusion.

The email begins with a generic greeting, “Dear Customer,” rather than addressing the recipient by name. This impersonal approach is typical of bulk phishing emails, which often lack the personalization found in legitimate communications from trusted companies.

Moreover, the email’s Reply-To field may show an address that appears to be from Apple, such as appleid-usen@email.apple.com. However, scammers can easily spoof sender information, making it look like the message is coming from a trusted source. Users should be cautious and evaluate all red flags collectively rather than relying solely on the sender’s address.

The language used in the email is also a telltale sign of a scam. Phrases like “You authorized a USD 2,990.02 payment to apple.com using PayPal” sound awkward and unnatural. Genuine Apple receipts typically reference specific products or subscriptions rather than vague payment notifications tied to password alerts.

Furthermore, the email may display a masked address or an unusual domain, such as relay.quickinvoicesus.com, which does not conform to standard Apple formatting. Legitimate Apple communications will reference the user’s Apple ID directly, not an unrelated invoice-style domain.
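The domain-mismatch checks described above can be sketched in a few lines of Python. This is a minimal illustration of the heuristic, not an Apple or PayPal tool; the sample headers below are modeled on the addresses mentioned in this scam (the spoofed email.apple.com sender and the relay.quickinvoicesus.com reply address).

```python
# Minimal sketch: flag an email whose sender-related headers disagree on domain.
from email import message_from_string
from email.utils import parseaddr

def header_domains(raw: str) -> dict:
    """Extract the domain from each sender-related header that is present."""
    msg = message_from_string(raw)
    domains = {}
    for header in ("From", "Reply-To", "Return-Path"):
        value = msg.get(header)
        if value:
            _, addr = parseaddr(value)
            if "@" in addr:
                domains[header] = addr.rsplit("@", 1)[1].lower()
    return domains

def looks_spoofed(raw: str) -> bool:
    """A message whose From/Reply-To/Return-Path domains differ is suspect."""
    return len(set(header_domains(raw).values())) > 1

# Invented example resembling the scam described in the article:
sample = (
    "From: Apple Support <appleid-usen@email.apple.com>\n"
    "Reply-To: billing@relay.quickinvoicesus.com\n"
    "Subject: App-specific password generated\n"
    "\n"
    "You authorized a USD 2,990.02 payment to apple.com using PayPal.\n"
)
print(looks_spoofed(sample))  # True: From and Reply-To domains differ
```

A domain mismatch alone does not prove fraud (legitimate mailers sometimes use separate reply domains), which is why experts advise weighing all the red flags together rather than relying on any single one.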

Scammers often create a sense of urgency by urging recipients to call immediately to report an unauthorized transaction. This tactic is a hallmark of phishing schemes, as legitimate companies encourage users to log in securely to their accounts rather than rushing them into calling a third-party number.

Once on the phone with a scammer, victims may be led to provide sensitive information or even financial details, resulting in losses that far exceed the fake $2,990 charge mentioned in the email.

If you receive an email of this nature, it is essential to take a moment to pause and assess the situation. Instead of clicking on links or calling numbers provided in the email, verify the details by visiting the official Apple and PayPal websites directly. If you did not generate an app-specific password and see no suspicious charges, you are likely safe.

To protect yourself from phishing scams, consider implementing a few smart habits. Enable two-factor authentication (2FA) on your Apple ID, PayPal, and email accounts. This additional layer of security can prevent unauthorized access even if someone guesses your password.

Always be cautious when an email urges you to call support or click on links. Instead, navigate directly to official websites by typing the addresses into your browser. Ensure that you have strong antivirus software installed on your devices, as it can help detect malicious links and block phishing sites.

Regularly update your software to fix vulnerabilities that attackers may exploit. Outdated software can make it easier for phishing and malware attacks to succeed. Additionally, avoid reusing passwords across different accounts, as this practice can put your entire digital life at risk if one account is compromised.

If you suspect that your email has been exposed in a data breach, consider using a password manager that includes a breach scanner to check for compromised credentials. Reducing the amount of personal information available online can also help decrease your risk of falling victim to phishing scams.

Lastly, report any suspicious emails to Apple at reportphishing@apple.com and mark them as phishing through your email provider. This action helps improve filters and protects others from becoming victims.

In the face of increasingly sophisticated phishing scams, it is vital to remain vigilant and informed. If you receive an email claiming to be from Apple regarding an app-specific password and a large PayPal charge, trust your instincts—it’s likely a scam. Always verify through official channels to protect your personal and financial information.

According to a PayPal spokesperson, “PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams. We always encourage consumers to practice vigilance online and to learn how to spot the warning signs of common fraud.”

Astronauts Return to Earth After ISS Mission Rescues Stranded Crew

A NASA crew successfully splashed down in the Pacific Ocean after completing a mission to the International Space Station, marking the agency’s first Pacific landing in 50 years.

NASA astronauts Anne McClain and Nichole Ayers, along with international crew members Takuya Onishi from Japan and Kirill Peskov from Russia, returned to Earth on Saturday, splashing down in the Pacific Ocean off the coast of Southern California. The landing occurred at 11:33 a.m. ET in a SpaceX capsule, marking a significant milestone as it was NASA’s first Pacific splashdown in five decades.

The crew’s mission involved relieving two astronauts, Suni Williams and Butch Wilmore, who had been stranded aboard the International Space Station (ISS) for nine months. Their extended stay was due to issues with the Boeing Starliner capsule, which had experienced thruster problems and helium leaks. NASA ultimately deemed it too risky to return Williams and Wilmore in the Starliner, which flew back to Earth without a crew. Instead, the two astronauts returned home in a SpaceX capsule after their replacements arrived.

Wilmore announced his retirement from NASA earlier this week after a distinguished 25-year career. Reflecting on their mission, McClain expressed hopes that it would serve as a reminder of the power of collaboration and exploration, especially during challenging times on Earth. She shared her anticipation of enjoying some downtime upon her return, while her crewmates looked forward to indulging in hot showers and burgers.

This mission also marked a change for SpaceX, which opted to switch its splashdown locations from Florida to California to minimize the risk of debris falling on populated areas. After exiting the spacecraft, the crew underwent medical checks before being transported by helicopter to meet a NASA aircraft bound for Houston.

Steve Stich, manager of NASA’s Commercial Crew Program, expressed satisfaction with the mission’s outcome during a post-splashdown press conference. “Overall, the mission went great, glad to have the crew back,” he stated. “SpaceX did a great job of recovering the crew again on the West Coast.”

Dina Contella, deputy manager for NASA’s International Space Station program, echoed this sentiment, noting her happiness at seeing the Crew 10 team back on Earth. She remarked that the crew had orbited the Earth 2,368 times and traveled more than 63 million miles during their 146 days in space.
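As a quick back-of-the-envelope check (my own arithmetic, not a NASA figure), the reported mission statistics can be cross-checked against one another:

```python
# Cross-check the reported Crew-10 statistics: 2,368 orbits and
# 63 million miles over 146 days in space.
orbits = 2368
miles = 63_000_000
days = 146

avg_speed_mph = miles / (days * 24)            # average orbital speed
minutes_per_orbit = days * 24 * 60 / orbits    # implied orbital period

print(f"average speed: {avg_speed_mph:,.0f} mph")        # ≈ 17,979 mph
print(f"orbital period: {minutes_per_orbit:.0f} minutes") # ≈ 89 minutes
```

Both results land close to the ISS’s well-known parameters, roughly 17,500 mph and an orbit of about 90 minutes, so the reported figures are mutually consistent.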

This successful mission underscores the ongoing collaboration between NASA and commercial partners like SpaceX, as they work together to advance human space exploration.

According to Fox News, the mission’s success highlights the resilience and adaptability of space travel in the modern era.

11 Indian-American Innovators Recognized in Forbes’ 250 Greatest Innovators

Forbes has recognized 11 Indian Americans in its “250 America’s Greatest Innovators” list, highlighting their significant contributions to technology and medicine as the nation celebrates its 250th anniversary.

Forbes recently unveiled its “250 America’s Greatest Innovators” list to commemorate the United States’ 250th anniversary, showcasing a diverse group of visionary founders and executives who are reshaping global technology and medicine. Among the honorees are 11 Indian Americans, whose groundbreaking work spans from the early days of the internet to the cutting-edge developments in generative AI.

Leading this distinguished group is Vinod Khosla, co-founder of Sun Microsystems and a prominent venture capitalist, who secured the No. 10 spot. Khosla is renowned for his “black swan” investing style, with early investments in OpenAI and green technology solidifying his reputation as a leading risk-taker in the industry.

Close behind Khosla are tech giants Satya Nadella and Sundar Pichai, who have been instrumental in “re-founding” Microsoft and Alphabet, respectively. Their leadership has pivoted these legacy companies toward an AI-first future, reflecting the transformative power of innovation in the tech landscape.

The Forbes list emphasizes that innovation is often a marathon rather than a sprint. Suma Krishnan, who ranks No. 127, has made significant strides in treating “butterfly skin” disease. She co-founded Krystal Biotech in her 50s to develop the first topical gene therapy, marking a pivotal moment in medical innovation.

Similarly, Jay Chaudhry, ranked No. 128, has been recognized for his pioneering work in “zero trust” cloud security at Zscaler, which has disrupted the traditional firewall industry and redefined security protocols in the digital age.

The Indian American diaspora continues to make substantial contributions to technical infrastructure. Neha Narkhede, co-founder of Confluent and now CEO of Oscilar, is celebrated at No. 155 for her work in real-time data streaming. At MIT, Sangeeta Bhatia, ranked No. 161, has been honored for her innovative approach to merging microchips with biology, revolutionizing drug testing methodologies.

The diversity of this group extends into the daily lives of millions. Aman Narang, who ranks No. 177, has transformed the restaurant industry with Toast’s management platform. Baiju Bhatt, at No. 183, has democratized retail investing through Robinhood and is now pivoting to space-based solar power with Aetherflux. Naval Ravikant, ranked No. 230, has broadened access to startup funding via AngelList, further contributing to the entrepreneurial ecosystem.

The final names on the list reflect a commitment to human equity and efficiency. Shan Sinha, at No. 202, has made significant contributions to data management and healthcare safety. Shiv Rao, ranked No. 235, has been recognized for his AI medical scribe, Abridge, which automates clinical documentation to alleviate physician burnout, while Shivani Siroya, ranked No. 238, has been lauded for her work with Tala, which utilizes mobile data to provide credit to the “unbanked” in emerging markets.

This impressive collection of 11 innovators underscores a robust pipeline of talent that has become essential to the American economy. Whether they began their journeys in a garage or now lead major conglomerates, these individuals have successfully transformed complex scientific and digital theories into everyday realities.

According to Forbes, the achievements of these innovators highlight the critical role that diverse perspectives play in driving progress and shaping the future.

Four Indian-American Researchers Selected as 2026 Sloan Research Fellows

Four Indian American researchers have been awarded the 2026 Sloan Research Fellowships, recognizing their contributions to science and innovation in their respective fields.

Four Indian American researchers have been named among the 126 recipients of the prestigious 2026 Sloan Research Fellowships. Aayush Jain, Arun Kumar Kuchibhotla, and Aditi Raghunathan from Carnegie Mellon University, along with Anand Natarajan from the Massachusetts Institute of Technology (MIT), have been honored for their exceptional research accomplishments.

The Sloan Research Fellowships, awarded annually by the Alfred P. Sloan Foundation, celebrate early-career researchers who demonstrate creativity and innovation in their fields. Each fellowship includes a two-year grant of $75,000, which can be utilized flexibly to support the fellow’s research initiatives.

Stacie Bloom, president and CEO of the Alfred P. Sloan Foundation, remarked, “The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines. We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all.”

Aayush Jain serves as an assistant professor in the Computer Science Department at Carnegie Mellon University. His research focuses on theoretical and applied cryptography, particularly the mathematical foundations that ensure the security of modern cryptographic systems. Jain aims to identify new sources of computational hardness and strengthen the long-term security of encrypted computation, addressing critical gaps in post-quantum cryptography. Additionally, he is dedicated to training graduate students in foundational cryptographic theory.

Arun Kumar Kuchibhotla, an associate professor in the Department of Statistics and Data Science at Carnegie Mellon, tackles foundational challenges in statistical inference and predictive learning. His work has significant applications in machine learning and artificial intelligence, where he develops robust, “assumption-lean” frameworks for uncertainty quantification. Kuchibhotla’s research also contributes to financial time series forecasting and causal inference significance testing. He has pioneered “honest inference” procedures, such as the Hull-based Confidence Method (HulC), which maintain validity in high-dimensional and irregular settings where traditional methods often falter.

Aditi Raghunathan, also an assistant professor in the Computer Science Department at Carnegie Mellon, focuses on understanding the vulnerabilities of AI systems and developing models that are safe, accurate, and reliable in real-world applications. She leads the AI Reliability Lab, which is dedicated to creating trustworthy AI through rigorous analysis and principled methodologies. Raghunathan’s research has garnered recognition at prestigious conferences and plays a crucial role in promoting responsible AI system design and deployment.

Anand Natarajan, an associate professor in Electrical Engineering and Computer Science at MIT, is a principal investigator at the Computer Science and Artificial Intelligence Lab and the MIT-IBM Watson AI Lab. His research primarily revolves around quantum complexity theory, exploring the power of interactive proofs and arguments within a quantum framework. Natarajan’s work aims to evaluate the complexity of computational problems in quantum settings, assessing both the capabilities and the reliability of quantum computers. He holds a PhD in physics from MIT, along with an MS in computer science and a BS in physics from Stanford University. Before joining MIT in 2020, he was a postdoctoral researcher at the Institute for Quantum Information and Matter at Caltech.

The recognition of these four researchers underscores the significant contributions of Indian Americans in advancing scientific knowledge and innovation. Their work not only enhances their respective fields but also sets a foundation for future breakthroughs in technology and research.

According to The American Bazaar, the Sloan Research Fellowships continue to highlight the importance of supporting early-career scientists who are poised to make substantial impacts in their disciplines.

Indian-American Billionaire Vinod Khosla Criticizes Ro Khanna, Bernie Sanders on AI

Indian American billionaire Vinod Khosla criticized U.S. lawmakers Ro Khanna and Bernie Sanders for their warnings about artificial intelligence in a recent post on social media platform X.

Indian American billionaire Vinod Khosla has publicly expressed his discontent with U.S. lawmakers Ro Khanna and Bernie Sanders. In a recent post on X, Khosla launched a scathing critique of their warnings regarding the potential negative consequences of artificial intelligence (AI).

In his post, Khosla stated, “Bernie Sanders, Ro Khanna warn of AI’s potential negative consequences. Morons like Ro Khanna and Bernie Sanders will stop all the good AI can do to protect their religion. Good intentions but bad outcomes is ok for these socialists/commie.”

Vinod Khosla is a well-known Indian-American entrepreneur, venture capitalist, and technology investor. Born in 1955 in India, Khosla began his academic journey as an electrical engineer at the Indian Institute of Technology (IIT) Delhi, later earning a Master’s degree in Biomedical Engineering from Carnegie Mellon University. His career took off at Sun Microsystems, where he was part of the founding team that contributed to the company’s early success.

Khosla gained significant recognition as a general partner at Kleiner Perkins Caufield & Byers, one of Silicon Valley’s most influential venture capital firms, focusing primarily on technology investments. In 2004, he established Khosla Ventures, which invests in clean technology, biotechnology, and disruptive startups. Known for his bold investment strategies and advocacy for technological innovation, Khosla has played a pivotal role in shaping the investment landscape of Silicon Valley, often taking high-risk bets that challenge conventional approaches.

The recent exchange between Khosla and the lawmakers followed a town hall meeting at Stanford University on February 20, 2026. During this event, Sanders articulated concerns that artificial intelligence is advancing at a pace that existing economic and political systems cannot adequately manage. He further questioned Silicon Valley’s assertions that AI will inherently deliver broad public benefits, recalling similar claims made during previous technological advancements that ultimately resulted in increased wealth and power concentration.

This clash between Khosla and U.S. lawmakers underscores a broader tension at the intersection of technology, policy, and societal oversight. It reflects the ongoing debate about how rapidly emerging technologies, particularly artificial intelligence, should be guided, regulated, and integrated into public life. Advocates like Khosla emphasize the transformative potential of AI in addressing complex global challenges, from healthcare innovations to energy efficiency. They argue that excessive regulation could stifle progress and limit the benefits that AI could provide.

On the other hand, critics such as Sanders and Khanna highlight the necessity for caution, stressing that technological advancements often outpace the social, economic, and ethical frameworks required for responsible management. Their concerns are rooted in historical patterns where technological optimism has sometimes led to concentrated wealth and power, along with unforeseen societal consequences.

The ongoing dialogue between Khosla and lawmakers illustrates the complexities surrounding the development and implementation of artificial intelligence, a technology that promises significant advancements but also raises critical ethical and regulatory questions.

According to The American Bazaar, this exchange is part of a larger conversation about the future of AI and its impact on society.

Spyware Can Take Control of Your Phone in Seconds

ZeroDayRAT spyware poses a significant threat to mobile users, enabling attackers to access personal data, including messages, location, and live camera feeds on both iPhone and Android devices.

In an age where digital security is paramount, the emergence of ZeroDayRAT spyware has raised alarms among mobile users. This sophisticated malware can compromise both iPhone and Android devices, granting attackers access to a wide range of personal information, including messages, notifications, location data, and even live camera feeds.

Unlike traditional malware that typically targets specific data, ZeroDayRAT functions as a comprehensive mobile compromise toolkit. Security researchers from iVerify, a mobile security and digital forensics company, have described it as a significant threat due to its extensive capabilities.

Once installed, ZeroDayRAT begins transmitting data back to a central dashboard controlled by the attacker. This dashboard allows cybercriminals to build detailed profiles of victims, tracking their daily activities, communication patterns, and app usage. Reports indicate that the dashboard even includes a live activity timeline, offering chilling insights into a user’s life.

What sets ZeroDayRAT apart from other malware is its advanced surveillance features. The spyware includes keylogging and live surveillance tools, enabling attackers to monitor users as they log into sensitive accounts or engage in private conversations. This level of intrusion is not merely hypothetical; it is a built-in capability of the spyware.

In addition to spying on personal communications, ZeroDayRAT targets financial applications directly. It reportedly includes tools designed to compromise digital payment systems such as Apple Pay and PayPal. The spyware can intercept banking notifications and utilize clipboard injection techniques to redirect cryptocurrency transactions to the attacker’s wallet. This means that even without full control of the device, the spyware can facilitate significant financial theft.

Alarmingly, ZeroDayRAT is openly marketed on platforms like Telegram, making it accessible to individuals without advanced hacking skills. This combination of power and accessibility heightens the threat it poses to mobile users.

Both Apple and Google have long warned against installing applications from outside their official app stores, as sideloading can weaken security measures. When users bypass these trusted platforms, they increase their risk of encountering spyware like ZeroDayRAT. Although no system is infallible, sticking to recognized app marketplaces can significantly reduce the chances of infection.

Advanced spyware is designed to remain hidden, often without triggering obvious warnings. However, there are subtle signs that may indicate an infection. Users should be vigilant for rapid battery drain, unexpected device heat, and unusual spikes in mobile data usage. Additionally, checking for unfamiliar apps or configuration profiles can help identify potential threats.

If users suspect their device may be compromised, it is crucial to act quickly. The first step is to disconnect from Wi-Fi and cellular data to prevent further data transmission to the attacker. Changing passwords should be done from a secure device, and enabling two-factor authentication (2FA) on all accounts is highly recommended.

Installing robust antivirus software on mobile devices can also help detect and remove malicious applications. Users should regularly review app permissions and remove any that seem unnecessary or suspicious. For iPhone users, checking for unknown configuration profiles in the settings is essential, while Android users should scrutinize installed apps and device administrator permissions.

In cases where a device is severely compromised, a factory reset may be necessary to eliminate the spyware. This process wipes the device clean, removing hidden malware components. However, users should back up only essential files and avoid restoring full system backups that could reintroduce malicious software.

Given that ZeroDayRAT specifically targets banking and cryptocurrency applications, users should closely monitor their financial accounts for any unusual transactions. If suspicious activity is detected, it is imperative to contact the bank immediately.

While the threat of spyware like ZeroDayRAT is unsettling, users can take proactive steps to safeguard their digital security. Only installing apps from trusted sources, avoiding links from unknown senders, and regularly updating operating systems can help mitigate risks. Additionally, utilizing reputable password managers and enabling 2FA can provide an extra layer of protection.

Ultimately, the responsibility for digital safety lies with users. By remaining cautious and informed, individuals can significantly reduce their risk of falling victim to spyware attacks. The question remains: Are tech companies and app stores doing enough to protect users from such sophisticated threats? This ongoing concern highlights the need for continued vigilance in the face of evolving cyber threats.


Harvard Physicist Suggests Interstellar Object May Be Alien Probe

Harvard physicist Dr. Avi Loeb suggests that the interstellar object 3I/ATLAS may be more than a comet, potentially serving as an alien probe on a reconnaissance mission.

A massive interstellar object, known as 3I/ATLAS, has recently captured the attention of astronomers and scientists alike due to its unusual characteristics. Harvard physicist Dr. Avi Loeb has raised the possibility that this object could be more than just a typical comet, suggesting it may be on a reconnaissance mission.

Dr. Loeb, a science professor at Harvard University, expressed his concerns in an interview with Fox News Digital. “Maybe the trajectory was designed,” he said. “If it had an objective to sort of be on a reconnaissance mission, to either send mini probes to those planets or monitor them… It seems quite anomalous.”

The object was first detected in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile. This discovery marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb pointed out an intriguing detail: an image of the object shows an unexpected glow appearing in front of it, rather than trailing behind, which is typical for comets. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.”

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is notably bright for its distance from the sun. However, Dr. Loeb emphasized that the most striking feature of this interstellar visitor is its trajectory.

“If you imagine objects entering the solar system from random directions, just one in 500 of them would be aligned so well with the orbits of the planets,” he stated. The object, which originates from the center of the Milky Way galaxy, is predicted to pass near Mars, Venus, and Jupiter—an event that, according to Loeb, is highly improbable to occur by chance. “It also comes close to each of them, with a probability of one in 20,000,” he added.
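Loeb’s one-in-500 figure rests on assumptions about the distribution of arrival directions that the article does not spell out. Purely as an illustration of how such an estimate can be framed (a sketch, not Loeb’s actual calculation; the 5-degree tolerance below is a hypothetical choice), the following Monte Carlo computes the fraction of isotropically distributed directions that lie within a given angle of a reference plane:

```python
import math
import random

def aligned_fraction(tolerance_deg, trials=200_000, seed=42):
    """Fraction of isotropic unit vectors lying within `tolerance_deg`
    of a reference plane (z = 0, standing in for the ecliptic).
    For an isotropic distribution the exact answer is sin(tolerance),
    which the simulation approaches as `trials` grows."""
    rng = random.Random(seed)
    tol = math.radians(tolerance_deg)
    hits = 0
    for _ in range(trials):
        # A z-coordinate uniform in [-1, 1] yields directions uniform
        # over the sphere; inclination above the plane is asin(|z|).
        z = rng.uniform(-1.0, 1.0)
        if math.asin(abs(z)) <= tol:
            hits += 1
    return hits / trials

# With a hypothetical 5-degree tolerance, roughly 8.7% of random
# directions qualify (sin 5° ≈ 0.0872), i.e. about 1 in 11. Much
# smaller odds, like the quoted 1 in 500, imply a correspondingly
# tighter alignment criterion.
print(aligned_fraction(5.0))
```

The point of the sketch is that any such probability is only as strong as the alignment tolerance behind it: the qualifying fraction shrinks roughly in proportion to the sine of the tolerance angle.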

NASA has indicated that 3I/ATLAS will reach its closest point to the sun—approximately 130 million miles away—on October 30. Dr. Loeb remarked on the potential implications of the object’s nature, stating, “If it turns out to be technological, it would obviously have a big impact on the future of humanity. We have to decide how to respond to that.”

In a related note, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics previously confused a Tesla Roadster launched into orbit by SpaceX CEO Elon Musk with an asteroid, highlighting the complexities of identifying celestial objects.

A spokesperson for NASA did not immediately respond to requests for comment from Fox News Digital.

According to Fox News Digital, the ongoing investigation into 3I/ATLAS may provide insights into the nature of interstellar objects and their potential significance in our understanding of the universe.

Are Social Media Platforms Operating Within Reasonable Guidelines?

Mark Zuckerberg’s recent testimony in a landmark social media addiction trial raises questions about the responsibility of tech companies in addressing addiction and mental health issues.

The term “reasonable” took center stage during Mark Zuckerberg’s recent testimony in a significant social media addiction trial held in Los Angeles Superior Court. The case, brought by a plaintiff who claims that social media platforms contributed to her depression and suicidal thoughts, has drawn considerable attention to the ethical responsibilities of these companies.

As the trial unfolds, TikTok and Snapchat have already reached settlements, leaving Meta, the parent company of Facebook and Instagram, and Google’s YouTube as the remaining defendants. The implications of this case extend beyond the courtroom, as it raises critical questions about the role of social media in users’ mental health and well-being.

During the proceedings, Zuckerberg provided five hours of testimony, which concluded on February 18. Following his appearance, he exited the courthouse through a back door, a move that has sparked speculation about the pressures surrounding the case.

To gain a deeper understanding of the issues at play, Vikram R. Bhargava, an assistant professor of strategic management and public policy at the George Washington University School of Business, offers expert insight. Bhargava’s research focuses on the ethical and policy challenges posed by emerging technologies, including the dynamics of social media and technology addiction.

His work has been featured in prominent business ethics journals, addressing the responsibilities of tech companies in mitigating the risks associated with their platforms. Bhargava emphasizes the need for a clear definition of what constitutes “reasonable” conduct in the tech industry, particularly as it pertains to user engagement and mental health.

As the trial progresses, the outcomes could set important precedents for how social media platforms are regulated and held accountable for their impact on users. The case not only highlights individual experiences but also reflects broader societal concerns about the influence of technology on mental health.

For those interested in exploring this topic further, Bhargava is available for interviews. To arrange a discussion, please contact Claire Sabin at claire.sabin@gwu.edu.

This trial represents a pivotal moment in the ongoing conversation about the responsibilities of social media platforms and their role in society. As the legal proceedings continue, many are watching closely to see how the court will address the complex interplay between technology, addiction, and mental health.

According to GlobalNetNews, the outcomes of this case could have lasting implications for the future of social media regulation and the ethical obligations of tech companies.

Aalyria, Google Spinout Startup, Secures $100 Million in Funding

Aalyria, a startup spun out from Google, has secured $100 million in funding to enhance high-speed communication networks amid increasing U.S. government investment in defense technology.

Aalyria, a startup that emerged from Google in 2022, has raised $100 million in a funding round led by Battery Ventures, bringing the company’s valuation to $1.3 billion.

Specializing in high-speed communication networks, Aalyria’s software is designed to improve service delivery across various environments, including land, sea, and space. This funding round coincides with a notable increase in U.S. government spending on defense technology and national security satellites, aimed at maintaining a competitive edge over China.

Google continues to hold a stake in Aalyria, which has attracted additional investment from firms such as J2 Ventures and DYNE.

Michael Brown, a general partner at Battery Ventures, highlighted the impact of SpaceX’s Starlink on the satellite industry. He noted that Starlink’s success in commercializing low Earth orbit satellites has heightened competitive concerns among satellite vendors. Starlink has been securing government contracts and appealing to consumers, particularly in regions underserved by traditional high-speed internet services. Brown stated, “They love Starlink but want alternatives, too.”

According to Brown, Aalyria plays a crucial role in this landscape. “When you have a diversity of satellite platforms, including in lower and mid-Earth orbit, the ability to route traffic between them has been nearly impossible. But they provide a seamless networking layer,” he explained.

Aalyria has already established contracts and secured research funding from a variety of partners, including Telesat, the U.S. Air Force, NASA, the Defense Department’s Defense Innovation Unit, the European Space Agency, and other government entities.

In the event of a natural disaster that disrupts ground-based cell towers, Aalyria’s Spacetime software enables a satellite communications network to quickly adapt and cover the affected area within seconds, rather than days. Brian Barritt, the company’s founder and technology chief, emphasized the importance of this capability, stating that in space, the software directs satellites in a constellation to automatically reconfigure to address gaps when other satellites are compromised.
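The article does not describe how Spacetime performs this reconfiguration internally. Purely as an illustrative sketch (not Aalyria’s actual algorithm, and with invented node names and link latencies), rerouting around a failed satellite can be modeled as a shortest-path recomputation over the constellation’s link graph:

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over an undirected link graph:
    `links` maps node -> {neighbor: latency}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u == dst:
            break
        for v, w in links.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy constellation: ground stations G1/G2 linked via satellites,
# edge weights are hypothetical latencies.
links = {
    "G1": {"SAT-A": 5, "SAT-B": 7},
    "SAT-A": {"G1": 5, "SAT-C": 4},
    "SAT-B": {"G1": 7, "SAT-C": 6},
    "SAT-C": {"SAT-A": 4, "SAT-B": 6, "G2": 3},
    "G2": {"SAT-C": 3},
}
print(shortest_path(links, "G1", "G2"))  # routes via SAT-A

# Simulate SAT-A failing: drop it from the graph and recompute,
# mimicking the "reconfigure to address gaps" behavior described above.
failed = {n: {v: w for v, w in nbrs.items() if v != "SAT-A"}
          for n, nbrs in links.items() if n != "SAT-A"}
print(shortest_path(failed, "G1", "G2"))  # falls back to SAT-B
```

In this toy model, losing SAT-A simply makes the recomputed shortest path fall back to the SAT-B route; a real orchestration layer would also account for link capacity, orbital geometry, and handover timing.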

Barritt acknowledged that one of the challenges in the market is that companies developing space-based networks often have significant investments at stake, leading them to consider building their own network orchestration solutions from the ground up. He noted that gaining their confidence can take time, but once they recognize the advantages of having their network operating system collaborate with others, orchestrate networks of networks, and monetize unused capacity, it can significantly shift the dynamics in Aalyria’s favor.

In addition to its software solutions, Aalyria offers Tightbeam, a laser-communication system that can be mounted on ships, planes, and other platforms. The technology enables data transmission over distances exceeding 100 kilometers, achieving speeds comparable to those of fiber optic internet.

This funding round and the ongoing developments in Aalyria’s technology come at a pivotal time as the U.S. government increases its investment in defense and satellite technology, further solidifying the company’s position in the market.

According to The American Bazaar, Aalyria’s innovative approach to communication networks positions it as a key player in the evolving landscape of satellite technology.

Sundar Pichai Unveils $15 Billion AI Investment in India’s Visakhapatnam

Sundar Pichai announced a $15 billion investment in artificial intelligence during the AI India Impact Summit, highlighting Visakhapatnam’s emergence as a global AI hub.

During the AI India Impact Summit held in New Delhi, Sundar Pichai, the CEO of Google and Alphabet, announced a significant $15 billion investment aimed at advancing artificial intelligence (AI) in India. Pichai emphasized the transformative potential of AI and its role in shaping the future of technology, particularly in emerging economies.

Speaking on the fourth day of the summit, Pichai remarked on the remarkable evolution of Visakhapatnam, a coastal city that Google has chosen as a focal point for its AI initiatives. He noted that the city is poised to become a major center for AI development as part of Google’s long-term strategy in India.

“I remember Visakhapatnam being a quiet and modest coastal city brimming with potential. Now, in that same city, Google is establishing a full-stack AI hub, part of our $15 billion infrastructure investment in India,” Pichai stated. He expressed his surprise at the city’s transformation into a global AI hub, highlighting its future capabilities, including gigawatt-scale computing and a new international subsea cable gateway.

Pichai underscored the significance of AI as a transformative force, stating that it represents “the biggest platform shift of our lifetimes.” He believes that AI has the potential to accelerate progress across various sectors and help emerging economies overcome traditional barriers to growth.

“The product shows what’s possible when humanity dreams big, and no technology has me dreaming bigger than AI,” he said. Pichai pointed out that while the potential for AI is immense, achieving its benefits is not guaranteed and requires concerted effort.

He highlighted the role of AI in advancing scientific discovery, citing the groundbreaking work of Google DeepMind in protein structure prediction. “For 50 years, predicting protein structures was a grand challenge that stalled drug discovery. Demis Hassabis and his team at Google DeepMind asked an audacious question: how could we use AI to solve this? That question led to AlphaFold,” Pichai explained.

This breakthrough, which recently won a Nobel Prize, has condensed decades of research into an open-access database that is now utilized by over 3 million researchers in more than 190 countries. These researchers are leveraging the database to develop malaria vaccines, combat antibiotic resistance, and tackle other critical health challenges.

Pichai further elaborated on the diverse applications of AI within the scientific community, from cataloging DNA disease markers to creating AI agents that serve as partners in research. “We must be equally bold in tackling problems in regions that have lacked access to technology,” he stressed.

In conclusion, Pichai reiterated the importance of responsible and inclusive AI development, emphasizing the need to ensure that the benefits of this technology reach all segments of society. His remarks at the summit reflect a commitment to fostering innovation and addressing global challenges through AI.

This article was republished with permission from Free Press Journal.

Microsoft Appoints Asha Sharma as Gaming Chief Amid Nepotism Claims

Microsoft’s appointment of Asha Sharma as the new head of its gaming division has sparked controversy, with accusations of “Indian nepotism” emerging on social media.

Microsoft announced on Friday that Asha Sharma will succeed Phil Spencer as the executive vice president and chief executive officer of its gaming division. Spencer, who has been with the company for 38 years, is retiring, marking a significant leadership transition for the tech giant’s gaming business.

Sharma, who previously led product development for Microsoft’s artificial intelligence models and services, is stepping into a role that includes overseeing the Xbox brand. Her appointment comes as part of a broader strategy to integrate AI into Microsoft’s offerings.

However, the announcement was met with immediate backlash on social media, where some users criticized the decision to promote Sharma. A vocal minority accused Microsoft of engaging in “Indian nepotism,” a term that quickly gained traction across various gaming forums and platforms like X.

The leadership changes at Microsoft do not end with Sharma. Sarah Bond, who has been serving as president of Xbox, is also set to step down. Matt Booty, the current head of game studios, will transition to the role of chief content officer and report directly to Sharma.

In a company blog post, CEO Satya Nadella outlined the new leadership structure, emphasizing the next phase for Microsoft’s gaming business. Sharma’s experience in building consumer products was cited as a key factor in her selection for the role.

Sharma has a long history with Microsoft, having worked with the company for over a decade. She initially joined the marketing division before leaving in 2013. After spending time at Instacart and Meta, she returned to Microsoft two years ago to take on a senior leadership role focused on core AI products.

Despite her qualifications, Sharma’s promotion has faced scrutiny. Critics on X questioned her lack of direct experience in the gaming industry, with one user stating, “Asha Sharma, the new head of Xbox, is an AI executive with no background in gaming.” Another user linked her promotion to a broader anti-immigrant sentiment, arguing that Microsoft has become synonymous with “Indian nepotism.”

The criticism intensified, with some users pointing to Sharma’s LinkedIn profile to argue that she had never held a position for more than four years, questioning her long-term leadership experience. Others, however, defended the decision, asserting that a chief executive does not need to be a gamer to effectively lead a global gaming business. Some commentators suggested that the backlash against Sharma may reflect underlying racism toward Indians in the tech industry.

The timing of this leadership change is particularly complex for Xbox. Following years of fierce competition with Sony and Nintendo, Spencer acknowledged in 2024 that the Xbox One had “lost the worst generation to lose.” In response, Microsoft has made significant investments to expand its reach, including a $69 billion acquisition of Activision Blizzard, while also cutting more than 2,500 jobs and closing multiple studios since 2024.

In an email to staff, Sharma sought to reassure employees and long-time players, stating, “We will recommit to our core Xbox fans and players, those who have invested with us for the past 25 years, and to the developers who build the expansive universes and experiences that are embraced by players across the world.” She further emphasized a renewed commitment to Xbox, starting with the console that has shaped the brand’s identity.

The ongoing debate surrounding Sharma’s appointment highlights the complexities of leadership transitions in the tech industry, particularly in a landscape that is increasingly influenced by global talent and diverse backgrounds. As Microsoft navigates this new chapter, the implications of these changes will be closely watched by both industry insiders and consumers alike.

According to The American Bazaar, the reactions to Sharma’s promotion underscore the challenges that come with leadership changes in a competitive market.

Magure Achieves ISO Certifications for Reliable AI System Development

Magure, a UAE-based enterprise AI company, has achieved ISO 9001:2015, ISO/IEC 27001:2022, and ISO/IEC 42001 certifications, underscoring its commitment to building reliable and secure AI systems.

Magure, an enterprise AI company based in the United Arab Emirates, has attained ISO 9001:2015, ISO/IEC 27001:2022, and ISO/IEC 42001 certifications. The milestone reflects the company’s dedication to developing AI systems that are reliable, secure, and responsibly managed.

As organizations increasingly transition from experimenting with artificial intelligence to integrating it into mission-critical operations, trust has become a crucial factor for success. The need for quality, security, and responsible governance in AI deployment is now a foundational requirement rather than an optional consideration.

“As AI systems become more autonomous and deeply integrated into business operations, enterprises need more than innovation—they need assurance,” stated Akhil Koka, CEO of Magure. “These certifications validate the way Magure builds and manages AI systems and reinforce our mission to help enterprises scale AI with confidence, accountability, and long-term trust.”

With these certifications, Magure joins a select group of organizations worldwide and stands out as one of the early adopters in the UAE to demonstrate compliance with standards related to quality management, information security, and AI management systems. This accomplishment solidifies Magure’s position as a trusted partner for enterprises looking to deploy AI at scale.

As AI becomes increasingly embedded in core business functions, enterprises face growing challenges related to operational reliability, data security, regulatory compliance, and ethical oversight. The certifications obtained by Magure reflect a comprehensive approach to addressing these challenges throughout the entire AI lifecycle.

The ISO 9001:2015 certification for Quality Management Systems validates Magure’s quality management practices, ensuring that AI solutions are designed, delivered, and continuously improved through consistent and repeatable processes. This framework supports reliable, production-grade deployments for enterprises.

ISO/IEC 27001:2022 for Information Security Management Systems confirms that information security, privacy protection, and operational resilience are integral to Magure’s platforms and services. This certification safeguards enterprise data and AI operations throughout the AI lifecycle.

ISO/IEC 42001:2023, recognized as the world’s first international standard for Artificial Intelligence Management Systems, acknowledges Magure’s structured approach to managing AI responsibly. This certification embeds transparency, accountability, and oversight into the governance and operation of AI systems.

Together, these standards create a unified foundation for enterprise AI that can be trusted in real-world, regulated, and high-impact environments.

Magure’s ISO certifications align with the broader vision for responsible and secure AI adoption in the UAE. The principles embedded in ISO 9001, ISO/IEC 27001, and ISO/IEC 42001 closely reflect the expectations set by initiatives such as the UAE National AI Strategy 2031, the Dubai International Financial Centre’s data protection framework, and Dubai’s AI security policies. These frameworks emphasize trust, accountability, and resilience at the core of enterprise AI systems.

By aligning internationally recognized ISO standards with regional frameworks, Magure empowers enterprises operating in the UAE and beyond to adopt AI systems that are secure, well-governed, and designed for long-term trust.

Central to Magure’s platform strategy is MagOneAI, a unified, end-to-end agentic AI platform designed to assist enterprises in building, deploying, and managing autonomous AI applications that seamlessly integrate with existing data sources and operational workflows.

The three ISO standards are directly embedded into the operations of MagOneAI. Quality by design, aligned with ISO 9001, ensures that standardized, lifecycle-wide processes govern the design, deployment, monitoring, and improvement of agentic AI applications, delivering predictable performance from experimentation to production.

Security by default, aligned with ISO/IEC 27001, incorporates role-based access controls, encrypted data handling, environment segregation, continuous monitoring, and audit-ready logging to protect sensitive enterprise data as AI agents operate autonomously.

Responsible AI management, aligned with ISO/IEC 42001, introduces clear accountability and transparency into agent behavior, alongside policy-driven controls, risk management, and lifecycle governance. This ensures that AI systems remain observable, controllable, and compliant as they scale.

This integrated approach allows enterprises to move beyond isolated AI pilots and confidently deploy autonomous, production-grade AI systems.

The same ISO-aligned principles extend across Magure’s broader AI ecosystem. MagLabs, Magure’s use-case discovery and AI workflow environment, applies these standards from early experimentation through operational readiness. Additionally, MagVisionIQ, its computer vision platform, operates under the same disciplined quality, security, and responsible AI practices for real-world deployments.

Together, these platforms provide enterprises with a consistent and governed foundation for scaling AI without fragmentation as use cases grow in complexity and impact.

According to The American Bazaar, Magure’s commitment to these standards positions it as a leader in the responsible deployment of AI technologies.

Nobel Laureate Supports Musk and Gates on Future Job Reduction

As automation and artificial intelligence reshape the workforce, a Nobel laureate suggests that future generations may enjoy more free time and fewer traditional jobs.

On a serene morning in Stockholm, a Nobel Prize-winning physicist observes a robotic arm pouring coffee with remarkable precision. This small act serves as a microcosm of a much larger transformation taking place in the world of work.

“Your grandchildren will probably work less than you,” he states calmly. “Maybe a lot less.”

While offices outside buzz with activity and deadlines loom, inside research labs and warehouses, machines are increasingly capable of performing tasks that once required human intellect. From drafting emails and analyzing contracts to diagnosing illnesses and even generating software code, the capabilities of automation are expanding rapidly.

The pressing question many individuals find themselves pondering is no longer a matter of science fiction: If machines can do my job, what happens to me?

A Structural Shift, Not Just Another Tech Cycle

When Nobel laureates align their views with influential figures like Elon Musk and Bill Gates, it captures public attention. Several esteemed scientists, including theoretical physicist Giorgio Parisi, contend that the rise of artificial intelligence and robotics signifies a shift akin to the Industrial Revolution rather than merely an evolution of technology.

Musk envisions a future characterized by “universal high income,” where the necessity of work becomes optional. Gates similarly foresees AI systems generating “a lot of free time” by managing mundane tasks.

According to these Nobel physicists, productivity is set to soar, human labor hours will diminish, and the conventional notion of a lifelong job may not endure through the century. The trajectory they suggest points toward a future with significantly less compulsory work.

Automation Is Already Here

The evidence of this shift is plain to see. Modern warehouses operate with fleets of autonomous robots, while call centers utilize AI agents to manage thousands of conversations simultaneously. Hospitals are deploying algorithms to analyze scans and identify anomalies.

Historically, automation has eliminated certain jobs while creating new ones; farmers transitioned to factory workers, and factory workers evolved into office employees. However, this time, the landscape may be different.

AI is not limited to replacing physical labor; it also takes on cognitive tasks. It can draft reports, design systems, optimize logistics, and even write self-improving code. Consequently, the economy may maintain or even increase productivity with fewer full-time workers, leading to a society that is richer in productivity but potentially poorer in traditional employment opportunities.

The Paradox of Abundance

Theoretically, this shift should yield greater prosperity. If machines can produce more with less human labor, everyone stands to benefit. Yet, wages remain tethered to hours worked, raising concerns about income distribution. Musk refers to this era as the “age of abundance,” while economists explore models for guaranteed income or taxation of AI-driven capital.

The more profound question, however, is psychological: What occurs when work ceases to be the organizing principle of daily life?

The Hidden Risk: Emptiness

Jobs, even those that are less than ideal, provide a structure to our lives—waking up, commuting, completing tasks, taking breaks, and experiencing small victories. Removing this structure can lead to a sense of disorientation.

The potential danger of a world with fewer jobs is not laziness but rather a sense of meaninglessness. Without intentional design, free time may devolve into passive consumption—endless scrolling, distractions, and algorithm-driven habits.

A Nobel laureate recently articulated this concern: “I’m not afraid of machines working. I’m afraid of humans forgetting what to do when they are not working.”

How to Prepare for a Low-Work Future

If automation continues on its current trajectory, preparation may shift from traditional career paths to resilience. Discussions among technologists, economists, and scientists often highlight three key themes:

First, individuals should cultivate skills driven by curiosity rather than solely for employment. Interests such as art, language, gardening, programming, and music can endure beyond the fluctuations of job markets.

Second, prioritizing financial stability over status can provide flexibility in a world characterized by shifting roles and shorter contracts.

Lastly, strengthening community ties becomes essential as traditional work structures weaken. Those who thrive may not be the busiest individuals today but rather those who have learned to navigate life without constant direction.

A Future That Feels Like a Long Sunday

Imagine a weekday that resembles a leisurely Sunday afternoon. Your AI assistant has efficiently sorted your inbox, autonomous vehicles glide silently outside, and grocery stores operate largely through automation.

You may still work, but perhaps only 10 to 15 focused hours per week, engaging in distinctly human activities such as creativity, empathy, negotiation, and invention. Income might derive from state support or productivity-sharing mechanisms, supplemented by flexible, chosen contributions.

This future will not arrive abruptly; rather, it will gradually unfold—one automated system at a time.

A Civilizational Crossroads

For centuries, technological advancements have reduced the need for physical labor. Electricity, machinery, and computing have consistently shortened work hours. We may now be approaching a pivotal moment where compulsory labor declines significantly.

The central challenge is no longer merely about how we earn a living but rather how we derive meaning when work is no longer the core of our identity. The traditional 40-year, full-time career may prove to be a fleeting historical phase.

The next phase prompts a deeper inquiry: If work becomes optional, what will give life its purpose?

As experts continue to analyze these shifts, the implications for society remain profound.

Will AI eliminate most jobs? Many routine tasks are already automated, and experts suggest that total human working hours may decline significantly.

Will individuals personally lose their jobs? It is more likely that unstable, contract-based, or part-time work will replace lifelong employment.

Which jobs are more resilient? Roles requiring complex human interaction, creativity, care, and physical presence tend to adapt more slowly to automation.

Ultimately, whether less work proves beneficial depends on income policy, social structures, and how individuals choose to use their newfound free time. Managed effectively, it could enhance well-being; managed poorly, it could exacerbate inequality and social disconnection.

These insights reflect the evolving landscape of work and the need for society to adapt to a future where the nature of employment is fundamentally transformed, according to GlobalNetNews.

The Start of the Robotaxi Price War: Key Insights and Implications

The emergence of robotaxis is reshaping urban transportation, with companies like Waymo leading the charge in a competitive market marked by significant price differences and mixed safety records.

In several American cities, the future of transportation is already here: you can summon a driverless car with just a tap on your smartphone. These autonomous vehicles offer a ride without the small talk, wrong turns, or the need to tip. A driverless ride from Waymo in San Francisco averages around $8.17, while a traditional Uber ride in the same city costs approximately $17.25. The robotaxi price war has officially begun.

Waymo, a subsidiary of Alphabet (Google’s parent company), is currently the leader in the driverless car market. The company has provided 15 million driverless rides since its inception and currently logs about 400,000 rides per week. Valued at $126 billion, Waymo operates in several major cities, including Phoenix, the San Francisco Bay Area, Los Angeles, Austin, Atlanta, and Miami. By 2026, the company plans to expand to Dallas, Denver, Washington, D.C., London, Tokyo, and beyond.

In contrast, Tesla, which launched its robotaxi service in Austin last June, has made slower progress. The company has deployed roughly 31 vehicles, and each ride still requires a safety monitor to be present. This level of supervision highlights the challenges Tesla faces in achieving full autonomy.

Amazon’s Zoox is another player in the robotaxi arena, introducing a unique pod that lacks a steering wheel and can drive in both directions. Currently, rides in Las Vegas and San Francisco are free as the company awaits regulatory approval to begin charging for its services.

Waymo’s technology relies on a combination of cameras, lidar (laser-based sensing that builds a 3D map of the environment), and traditional radar, allowing it to operate effectively in total darkness and adverse weather. Tesla’s approach is more cost-effective, relying on cameras alone—eight in total—which lets it offer rides at a lower rate of $1.99 per kilometer.

However, the safety of these autonomous vehicles remains a topic of concern. Waymo has reported 1,429 incidents to regulators since 2021, resulting in 117 injuries and two fatalities. The company asserts that it has 80% fewer injury crashes than human drivers, but the National Highway Traffic Safety Administration (NHTSA) has documented several safety issues, including three software recalls—one issued last December after vehicles failed to stop for stopped school buses.

Personal experiences with these robotaxis can vary significantly. One individual recounted a ride in which the vehicle dropped her off a full mile from her intended destination; with no human driver to assist, she had no way to correct the course and was left at the mercy of the robotaxi’s navigation system.

When a robotaxi encounters a situation it cannot navigate, a human operator in a remote center can intervene by viewing the car’s cameras and guiding it through the confusion. During a Senate hearing, Waymo acknowledged that some of these remote operators are based in the Philippines, a revelation that did not sit well with lawmakers.

As urban transportation evolves, the economics of car ownership are also changing. With robotaxis operating for over 15 hours a day and costing less than traditional car expenses such as gas and insurance, the notion of owning a vehicle may soon feel akin to maintaining a gym membership that goes largely unused.

The future of driving appears to be steering toward a reality where no one is behind the wheel. For those who still believe self-driving cars are a thing of the future, it may be time to reconsider; the ride is already underway.

According to Fox News, the robotaxi landscape is rapidly changing, with companies vying for dominance in a market that promises to redefine urban mobility.

Panera Bread Data Breach Exposes Personal Information of 5.1 Million Customers

Panera Bread has confirmed a data breach that has exposed the personal information of approximately 5.1 million customers, prompting class-action lawsuits and concerns over identity theft.

Panera Bread has confirmed a significant cybersecurity incident that has compromised the personal information of millions of its customers. The hacking group ShinyHunters has claimed responsibility, stating that it stole a vast amount of customer records, leading to serious concerns for anyone who has interacted with the popular bakery chain.

Earlier this year, ShinyHunters added Panera Bread to its data leak site, initially asserting that it had stolen over 14 million customer records. The stolen data reportedly includes names, email addresses, phone numbers, home addresses, and account-related information. In response, Panera Bread acknowledged the breach, describing the exposed data as customer “contact information.” The company has since contacted law enforcement and taken steps to address the situation, although it has not disclosed specific technical details regarding the attack or whether customers need to take any immediate actions.

Even seemingly innocuous “contact information” can pose significant risks when it falls into the wrong hands. Such data can be exploited for identity theft, targeted phishing attacks, and social-engineering scams that are increasingly convincing.

ShinyHunters claims that the attackers accessed Panera’s systems through Microsoft Entra single sign-on (SSO). While Panera has not confirmed this assertion, it aligns with recent warnings from cybersecurity firm Okta about a rise in voice-phishing attacks targeting SSO platforms. In these attacks, criminals impersonate IT or helpdesk staff, pressuring employees to approve authentication requests or enter login credentials on fraudulent SSO pages. This method relies on human trust rather than technical vulnerabilities, making it particularly effective.

Initially, the claim of 14 million affected customers suggested a massive breach. However, researchers at Have I Been Pwned? later clarified that while the attackers stole 14 million records, this did not equate to 14 million unique individuals. After analyzing the leaked dataset, researchers estimate that the breach has impacted approximately 5.1 million unique customers. The exposed information includes email addresses, names, phone numbers, and physical addresses.

This distinction is crucial, but it does not eliminate the associated risks. Once data is publicly released, it can quickly circulate across criminal forums and be reused for malicious purposes for years to come.

ShinyHunters reportedly attempted to extort Panera Bread before releasing the stolen data. When those efforts failed, the group published a 760MB archive containing millions of customer records on its leak site. This incident reflects a broader trend in cybercrime, where many groups now focus on stealthily stealing data and threatening public exposure rather than deploying ransomware to lock systems. Such attacks are often faster, harder to detect, and can be just as profitable.

The breach has already led to legal repercussions, with multiple class-action lawsuits filed in U.S. federal court. These lawsuits allege that Panera failed to adequately protect customer data, claiming that the company knew or should have known about existing security vulnerabilities. The lawsuits seek damages, improved security practices, and long-term identity theft protection for affected customers. Panera has not publicly commented on the ongoing litigation.

This is not the first time Panera Bread has faced a significant security lapse. In 2018, a cybersecurity researcher revealed that the company had left millions of customer records exposed online in plain text, which subsequently led to lawsuits and settlements. Repeated breaches often indicate deeper systemic challenges, as large organizations can struggle to secure cloud services, identity systems, and employee access at scale. When attackers target identity platforms rather than infrastructure, a single misstep can expose millions of records.

Because customers often remain unaware of the risks until weeks or months after a breach, it is essential to take proactive measures to limit the potential fallout. If you have ever created a Panera Bread account, reset your password immediately. If you have reused that password elsewhere, those accounts may also be at risk: cybercriminals frequently test breached passwords across other platforms, including email, shopping, and banking sites.

Utilizing a password manager can help generate strong, unique passwords for each account and securely store them, eliminating the need to reuse credentials. Many password managers also provide alerts if your email or passwords appear in known data breaches, allowing for swift action to secure your accounts.
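Breach-alert features of this kind are commonly built on the free, publicly documented Pwned Passwords "range" API from the Have I Been Pwned service, which uses a k-anonymity scheme: only the first five characters of a password's SHA-1 hash ever leave your machine. The sketch below, using only Python's standard library, shows how such a check works; the endpoint and response format reflect the public documentation, but treat the details as an illustration rather than a hardened implementation.

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many times a password appears in known breaches.
    Only the 5-character hash prefix is transmitted; the server
    replies with every matching suffix, which we scan locally."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A nonzero count means the password has appeared in a breach corpus and should be retired everywhere it is used.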

Implementing two-factor authentication (2FA) adds an additional layer of security during the login process, typically through an app or device you control. Even if someone obtains your password through phishing or a breach, 2FA makes it significantly more challenging for them to access your account.

Cybercriminals often follow up breaches with fake emails or in-app messages that appear to offer assistance or security updates. It is crucial to verify the sender’s identity and avoid clicking on links within such messages. When in doubt, access the app or website directly instead of responding to the message.

Identity theft becomes a genuine risk when names, email addresses, phone numbers, and physical addresses are exposed. Identity theft protection services can monitor your personal information, alert you if it appears on the dark web, and watch for attempts to open new accounts in your name. In the event of a breach, these services often provide recovery support to help freeze accounts, dispute fraudulent activity, and guide you through the cleanup process.

Scammers do not rely on a single breach; they often combine leaked data with information from data broker sites to create detailed profiles. Data removal services can assist in removing your phone number, home address, and other personal details from numerous sites, making it more difficult for criminals to target you with convincing scams or identity fraud.

The recent data breach at Panera Bread serves as a stark reminder that even well-known brands can become significant targets for cybercriminals. While the company asserts that only contact information was exposed, such data can still fuel scams and identity theft long after the initial headlines fade. Remaining vigilant and proactive in the wake of breach news is essential for safeguarding your digital life.

For further information on protecting your personal data and navigating the aftermath of a breach, consult resources from cybersecurity experts.

According to Fox News, the situation continues to evolve as Panera Bread addresses the fallout from this incident.

FDA Resumes Review of Moderna’s mRNA Influenza Vaccine

The FDA has agreed to review Moderna’s application for the first mRNA-based flu vaccine after initially declining to do so, following a meeting with the company.

The Food and Drug Administration (FDA) has reversed its earlier decision and will now review Moderna’s application for the first mRNA-based flu vaccine. This change comes after a Type A meeting between Moderna and the agency, where the company proposed full approval for adults aged 50 to 64, as well as accelerated approval for those 65 and older, contingent on additional studies involving seniors.

The FDA has set a target date of August 5 for completing its review, which could allow the vaccine to be available in time for the upcoming flu season. This decision marks a significant step in the development of mRNA technology for flu prevention, a field that has faced scrutiny and skepticism from various quarters.

Critics of mRNA technology, including Robert F. Kennedy Jr. and other officials from the U.S. Department of Health and Human Services, have previously expressed doubts about the efficacy and safety of mRNA vaccines for respiratory viruses. Their concerns have led to the withdrawal of some federal funding related to mRNA vaccine research.

As the FDA prepares to review Moderna’s application, experts from George Washington University (GWU) are available to provide insights into the implications of this decision and the potential impact of mRNA technology on public health. Faculty members include Elizabeth Choma, a pediatric nurse practitioner and clinical assistant professor; Jennifer Walsh, a clinical assistant professor focused on pediatrics and health assessment; and Emily Smith, an associate professor specializing in infectious diseases and epidemiology.

Other experts from GWU include Asefeh Faraz Covelli, an associate professor in the Family Nurse Practitioner program; April Barbour, an internist and associate professor of medicine; and Mia Marcus, an associate clinical professor and primary care provider. Additionally, Maria Portela Martinez, an assistant professor of emergency medicine, and Andrew Meltzer, a professor of emergency medicine and chief of the clinical research section, are also available for commentary.

David Diemert, the clinical director of the GW vaccine research unit, and Jose Lucar, an associate professor of infectious diseases, are among the other faculty members who can provide expert opinions on the evolving landscape of vaccine development. Kelly Gebo, the dean of the GW Milken Institute School of Public Health, brings her expertise as an infectious disease physician and epidemiologist, focusing on disparities in healthcare access and outcomes.

The reopening of the review process for Moderna’s mRNA flu vaccine underscores the ongoing evolution of vaccine technology and its potential role in combating seasonal influenza. As the FDA moves forward with its review, the medical community and the public will be closely watching the developments surrounding this innovative approach to flu vaccination.

For further insights and to schedule interviews with GWU experts, interested parties can contact Katelyn Deckelbaum at katelyn.deckelbaum@gwu.edu.

According to Newswise, this decision could pave the way for a new era in flu prevention.

Trendy Tech Terms Influencing Internet Culture in 2023

Five key tech terms—slop, burner accounts, shadowbans, clickbait, and targeted ads—are shaping the way users interact with social media and perceive online content.

If your social media feed feels noisier, stranger, or more manipulated than it used to, you’re not alone. The internet has developed its own language, and buzzwords are quietly influencing what you see, what you don’t see, and how companies target you. From viral “slop” content to shadowbans and targeted ads, these terms play a significant role in how information spreads and how platforms manage user accounts.

Understanding these five key phrases can help you navigate the complexities of your digital life and regain control over your online experience.

Slop: The Noise in Your Feed

The term “slop” refers to mass-produced, low-effort digital content that is often generated quickly by artificial intelligence or created solely for clicks and engagement. This type of content includes spammy articles, recycled videos, misleading thumbnails, and other materials that lack real value.

While slop may seem harmless, it can crowd out reliable information, spread misinformation, and overwhelm your feed with noise instead of useful content. Social media platforms often struggle to control slop because it is designed to manipulate algorithms.

Fortunately, you can take back control by curating your feed and filtering out the noise.

Burner Accounts: The Hidden Identities

A burner account is a secondary or anonymous social media account used to conceal a person’s real identity. Some individuals create burner accounts for privacy, while others use them for trolling, harassment, or secretly viewing content.

Because burner accounts are difficult to trace, they are frequently associated with online harassment, fake engagement, or manipulation of public conversations. While platforms attempt to detect suspicious behavior, many burner accounts still evade detection.

Being cautious with unknown accounts can help protect your safety online.

Shadowbans: The Silent Filters

A shadowban can affect not only creators but also what users see. Social media platforms sometimes limit the visibility of specific accounts, topics, or types of content without notifying users. This means that posts may be hidden, pushed lower in your feed, or never shown to you at all, even if you follow the account.

This type of filtering is often driven by algorithms designed to reduce spam, harmful content, or policy violations. However, it can also shape your perception of what is popular or trending without your awareness.

Understanding shadowbans can help you recognize how your feed is curated and the potential biases that may influence your online experience.

Clickbait: The Allure of Misleading Headlines

Clickbait refers to exaggerated, misleading, or emotionally charged headlines designed to attract attention and drive clicks. While some clickbait may be harmless, it often leads to low-quality or misleading content that fails to deliver on its promises.

Clickbait exploits curiosity, fear, or surprise—powerful emotional triggers that drive engagement. This tactic is commonly employed by low-quality publishers and viral content farms.

Being aware of clickbait can help you discern between valuable content and sensationalized headlines.

Targeted Ads: The Personalization of Advertising

Targeted ads utilize data about your behavior, searches, location, and interests to deliver personalized advertisements. This is why you might see ads related to something you recently searched for or discussed near your phone.

Advertisers build detailed profiles based on browsing activity, app usage, and online behavior to predict what you are most likely to buy or engage with. This reliance on data collection means that adjusting your privacy settings, limiting ad tracking, and regularly reviewing app permissions can reduce how much data advertisers use to profile you.

If targeted ads feel a little too accurate, it’s because data brokers are constantly collecting and selling your information. Beyond adjusting privacy settings, consider removing your personal data from broker sites to minimize the profile advertisers build around you.

The modern internet operates on more than just technology; it thrives on attention, algorithms, and influence. Understanding terms like slop, shadowban, and targeted ads can help you recognize how platforms shape your experience and how companies compete for your clicks. The more you understand these trends, the easier it becomes to filter out noise, protect your privacy, and maintain control over what you see online.

For further insights into trending internet terms or to have something explained, you can reach out at Cyberguy.com.

Wearable Robotics Transforming Human Mobility in Walking and Running

Wearable robotics, including Nike’s Project Amplify and the Hypershell X exoskeleton, are transforming how we walk and run, aiming to enhance movement rather than replace it.

In recent years, the field of robotics has expanded beyond the confines of factories and laboratories, making its way into our daily lives. Wearable robotics, which include powered footwear and lightweight exoskeletons, are emerging as a new consumer category designed to assist movement rather than replace physical effort.

Historically, innovations in sports technology have focused on enhancing speed and performance, often benefiting elite athletes. However, the focus is shifting towards accessibility and support for everyday users. Nike’s Project Amplify exemplifies this trend. Developed in collaboration with robotics partner Dephy, this system integrates a carbon plate within the shoe and a motorized cuff worn above the ankle. The cuff uses sensors to monitor stride patterns in real time, providing subtle assistance that feels natural and smooth, rather than forcing movement.

Previous attempts at creating powered footwear faced challenges due to the weight of batteries and motors, which made the devices feel cumbersome and unbalanced. Modern designs have addressed these issues by relocating energy storage to the ankle or hips, thereby reducing strain on the feet and improving overall balance. Enhanced battery technology and advanced motion sensors allow these systems to adapt to users’ strides dynamically, making the experience feel like an extension of the body. Nike aims for a commercial release of Project Amplify around 2028.

However, Nike is not the only player in this evolving market. The Hypershell X is another notable example, designed as a lightweight outdoor exoskeleton for hikers and long-distance walkers. This system wraps around the waist and legs, employing small motors to alleviate fatigue during climbs and on uneven terrain. The goal is straightforward: to help users go farther without feeling drained. Hypershell has also introduced the X Ultra, a more robust version tailored for steeper terrains and longer excursions, providing stronger assistance while remaining compact enough to wear under standard outdoor gear.

Dnsys has also entered the market with the X1 all-terrain exoskeleton, aimed at hikers and outdoor enthusiasts. Unlike earlier lab prototypes, the X1 has been successfully sold through crowdfunding and direct online orders, marking it as one of the early consumer-ready entries in the wearable robotics space.

Another innovative product is WIM from WIRobotics, a wearable robot that weighs approximately 3.5 pounds and supports natural hip movement while walking. This device is targeted at older adults, active individuals, and those recovering from minor injuries, providing assistance without the bulkiness of traditional medical devices.

The medical applications of wearable robotics have been developing for a longer time. Companies like Ekso Bionics and ReWalk have created powered exoskeletons that assist individuals with spinal cord injuries or strokes in standing and walking. These systems are primarily used in rehabilitation clinics and select personal mobility programs, demonstrating how wearable robotics have evolved from medical settings to consumer-oriented designs.

What unites these diverse products is a common goal: to actively assist movement rather than merely track it. Many individuals face barriers to physical activity that are not solely related to injury; hesitation often plays a significant role. Concerns about knee pain, fatigue, or the fear of slowing down others can deter people from engaging in physical activity. Wearable robotics aim to bridge this confidence gap by reducing fatigue and supporting joints, making movement feel more attainable for those who might otherwise avoid it.

Comparatively, the rise of e-bikes serves as a relevant analogy. Electric assistance has not eliminated cycling; instead, it has broadened the demographic of people who feel comfortable riding a bike. Similarly, powered footwear and wearable robotics could democratize walking and running, making these activities more accessible to a wider audience.

For some, this technology might mean replacing short car trips with walking, while for older adults, it could facilitate prolonged activity without excessive fatigue. Casual runners may find they can complete their workouts with energy to spare, rather than struggling through the final stretch. This shift is not about creating super athletes; it is about empowering more individuals to participate in physical activities.

Even if you are not inclined to wear a powered exoskeleton, or are not eagerly awaiting the arrival of motorized shoes in 2028, the implications of this technology are significant. Wearable robotics are designed precisely for people who experience discomfort during long walks or skip runs out of concern about fatigue.

For some, this could translate to walking an extra mile effortlessly, while for others, it might mean keeping pace with friends or feeling more confident about starting a new fitness routine. Wearable robotics are reshaping the conversation around fitness, shifting the focus from speed and performance to comfort and accessibility.

As wearable robotics continue to evolve, the question is not whether they will improve, but how society will choose to integrate them into daily life. If these technologies can help you walk and run with less strain, would you consider using them, or would you prefer to rely solely on your own efforts? This is a conversation worth having as we navigate the future of movement.

According to Fox News, the potential of wearable robotics to enhance everyday mobility is becoming increasingly clear.

Bill Gates to Meet Andhra Pradesh Chief Minister for Strategic Talks

Bill Gates is set to visit Amaravati, Andhra Pradesh, for strategic discussions with Chief Minister N. Chandrababu Naidu, focusing on health and artificial intelligence.

In a significant development highlighting the intersection of technology and governance, Bill Gates, co-founder of Microsoft and a prominent figure in the tech industry, is scheduled to visit Amaravati, the capital of Andhra Pradesh. His meeting with Chief Minister N. Chandrababu Naidu aims to explore opportunities for expanding cooperation in two critical areas: health and artificial intelligence (AI).

This visit underscores Gates’s ongoing commitment to global health and technological advancement while showcasing Andhra Pradesh’s ambition to emerge as a leader in these fields. As India rapidly advances its digital infrastructure and technological capabilities, the country has become a focal point for tech giants, thanks to its vast and diverse market.

Under Naidu’s leadership, Andhra Pradesh has been proactive in leveraging technology to enhance governance and public welfare. Naidu, often recognized as a tech-savvy leader, has played a crucial role in driving digital initiatives across the state, which include e-governance and smart city projects.

The discussions between Gates and Naidu are expected to focus on how AI can be utilized to improve healthcare delivery in the state. India faces numerous healthcare challenges, including a shortage of medical professionals and inadequate infrastructure, particularly in rural areas. AI holds the potential to address some of these issues by facilitating remote diagnostics, predictive analytics for disease outbreaks, and personalized medicine.

Gates’s insights, supported by the resources of the Bill & Melinda Gates Foundation, could be instrumental in developing solutions tailored to the specific needs of Andhra Pradesh. The meeting is also likely to explore collaborative projects that align with the Gates Foundation’s focus on global health issues, such as eradicating infectious diseases and enhancing maternal and child health.

Andhra Pradesh could serve as a pilot region for innovative health interventions that, if successful, might be scaled across India and other developing regions. Gates’s interest in AI aligns with a broader global trend, where technology is increasingly recognized as a catalyst for economic and social development.

AI, in particular, has the potential to revolutionize various sectors, from agriculture to education, offering unprecedented opportunities for growth and efficiency. For Andhra Pradesh, embracing AI could lead to improved agricultural productivity, enhanced educational outcomes, and more efficient public services.

This visit also reflects a symbiotic relationship between global tech leaders and regional governments. As tech companies seek to expand their presence in emerging markets, they find willing partners in governments eager to harness technology for development. This partnership is mutually beneficial: tech companies gain access to new markets and data, while governments receive the technological expertise and investment necessary to drive growth.

In conclusion, Bill Gates’s visit to Andhra Pradesh represents more than just a high-profile meeting. It symbolizes the potential for technology to transform societies and underscores the importance of strategic partnerships in realizing this potential. As Andhra Pradesh continues its journey toward becoming a tech-driven state, the insights and collaboration from Gates and his foundation could play a pivotal role in shaping its future. Both Gates and Naidu share a vision of leveraging technology for the greater good, and this meeting may mark a significant step toward achieving that vision.

According to GlobalNetNews.

AI Summit Sees Strong Attendance on Opening Day

The AI Summit in New Delhi attracted a significant crowd on its opening day, showcasing India’s growing role in the global artificial intelligence landscape.

The bustling metropolis of New Delhi, renowned for its vibrant culture and historic landmarks, has added another highlight to its profile by hosting the much-anticipated AI Summit. On its opening day, the conference drew an impressive crowd, reflecting the increasing interest and investment in artificial intelligence across India. The event served as a melting pot of innovation and collaboration, underscoring India’s expanding prowess in the AI sector.

India, with its vast pool of tech-savvy talent and a rapidly digitizing economy, has emerged as a formidable player in the global AI arena. The summit, held at the expansive Pragati Maidan, showcased this evolution. Attendees, ranging from industry leaders to tech enthusiasts, were greeted with a plethora of exhibits that highlighted the country’s advancements in AI technologies.

The significance of the summit extends beyond the impressive turnout. It marks a pivotal moment in India’s technological journey, as the nation seeks to position itself as a global hub for AI development. With a government eager to foster innovation and a private sector keen to capitalize on AI’s potential, the summit serves as a platform to bridge these ambitions. It is a space where ideas are exchanged, collaborations are forged, and future pathways are charted.

The opening day featured keynote speeches from prominent figures in the tech industry, both domestic and international. These speeches set the tone for the event, emphasizing the transformative potential of AI across various sectors, including healthcare, agriculture, finance, and education. The narrative was clear: AI is not merely a technological advancement but a powerful tool for societal change.

However, India’s AI journey is not without its challenges. As the country embraces this technology, it must navigate issues related to data privacy, ethical AI deployment, and the digital divide. The summit’s robust agenda, which includes panel discussions and workshops on these critical topics, indicates a proactive approach to addressing these concerns.

The event also highlighted the role of startups in driving AI innovation. India’s startup ecosystem, one of the largest in the world, is a hotbed of AI-driven solutions. Many of these startups were present at the summit, showcasing cutting-edge technologies that promise to revolutionize industries. Their participation underscores the entrepreneurial spirit fueling India’s AI ambitions.

International participation at the summit further emphasizes India’s growing influence in the AI sector. Delegates from various countries attended, exploring opportunities for collaboration and investment. This international interest reflects India’s strategic importance in the global tech landscape, particularly as nations seek to diversify their tech partnerships.

The AI Summit is more than just an exhibition; it is a reflection of India’s aspirations and capabilities. As the world grapples with the implications of AI, India is positioning itself not just as a participant but as a leader in shaping the future of this technology. The massive turnout on day one is a testament to the excitement and interest surrounding India’s AI journey.

As the summit progresses, it will be intriguing to see how the dialogues and discussions unfold, particularly in areas such as AI ethics, policy-making, and international collaboration. The outcomes of these conversations could significantly influence the trajectory of AI development in India and beyond.

In conclusion, the AI Summit in New Delhi is a landmark event that highlights India’s commitment to embracing and leading in the AI revolution. It is a celebration of innovation, a forum for critical discussions, and a catalyst for future growth. As the summit continues, all eyes will be on New Delhi, eager to see what the next chapter in India’s AI story will bring, according to GlobalNetNews.

Dhireesha Kudithipudi Leads First U.S. Open-Access Neuromorphic Computing Hub

Dhireesha Kudithipudi is spearheading the first open-access neuromorphic computing hub in the U.S. at the University of Texas at San Antonio, aiming to democratize artificial intelligence research.

Indian American computer scientist Dhireesha Kudithipudi is transforming the landscape of artificial intelligence (AI) in the United States. As the founding director of the MATRIX AI Consortium at the University of Texas at San Antonio (UTSA), she is at the forefront of launching THOR: The Neuromorphic Commons, the first open-access hub of its kind in the country.

Funded by the National Science Foundation, the THOR project seeks to democratize access to neuromorphic computing, a field that emulates the architecture of the human brain to process information. Unlike traditional silicon chips, which consume significant amounts of electricity regardless of the task, neuromorphic systems operate on an “event-based” model, activating only when new data is detected.
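The "event-based" idea can be illustrated with a toy leaky integrate-and-fire neuron. This is a minimal illustrative sketch of the general principle, not SpiNNaker2's actual model: the neuron performs computation only when an input spike arrives, and silent intervals cost nothing because the decay over the gap is applied lazily in a single step.

```python
import math

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron, updated only on input events."""

    def __init__(self, threshold=1.0, tau=10.0):
        self.threshold = threshold        # firing threshold
        self.tau = tau                    # membrane time constant (ms)
        self.potential = 0.0              # membrane potential
        self.last_event_time = 0.0

    def receive(self, t, weight):
        """Handle an input spike at time t; return True if the neuron fires."""
        # Apply the decay for the whole silent interval in one step --
        # no work was done between events.
        elapsed = t - self.last_event_time
        self.potential *= math.exp(-elapsed / self.tau)
        self.last_event_time = t

        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0          # reset after firing
            return True
        return False

neuron = LIFNeuron()
# Two weak spikes close together push the neuron over threshold;
# a third spike after a long silent gap does not, because the
# potential has decayed back toward zero in the meantime.
events = [(1.0, 0.6), (2.0, 0.6), (50.0, 0.6)]
fired = [neuron.receive(t, w) for t, w in events]
print(fired)  # [False, True, False]
```

Contrast this with a clocked simulation, which would update every neuron at every tick regardless of activity; the event-driven formulation is what lets neuromorphic hardware sit idle, and draw little power, when nothing is happening.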

“THOR is the U.S. national hub for neuromorphic computing,” said Kudithipudi, who also holds the Robert F. McDermott Chair in Engineering at UTSA. “We are democratizing the technology, expanding industry-academia partnerships, and serving as a catalyst for bringing neuromorphic computing closer to real-world applications.”

Historically, access to such advanced hardware has been limited to elite corporate laboratories or well-funded academic institutions. In contrast, UTSA’s new initiative functions similarly to a public library, allowing researchers and students nationwide to apply for free access to run experiments. This approach significantly lowers the barrier to entry for the next generation of engineers.

At the core of the hub is the SpiNNaker2 system, a substantial platform featuring approximately 400,000 processing elements. Developed in collaboration with SpiNNcloud, this hardware utilizes energy-efficient ARM-based cores, akin to those found in smartphones, to simulate the pulsing signals of biological neurons and synapses.

The practical implications of this energy efficiency are profound. According to the research team, neuromorphic chips have the potential to revolutionize medical devices. For instance, they could enable pacemakers to adapt in real-time to a patient’s physical distress or allow hearing aids to intelligently filter background noise without quickly draining their batteries.

In addition to energy savings, Kudithipudi and her colleagues are addressing the issue of “catastrophic forgetting,” a common flaw in AI systems where machines lose previously acquired knowledge when learning new information. By mimicking the brain’s “lifelong learning” capabilities, THOR could facilitate the development of AI that evolves continuously.
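Catastrophic forgetting can be demonstrated in miniature. The following toy sketch (illustrative only; it is not THOR's method) fits a one-parameter model y = w·x by gradient descent on one task and then another. Training on task B alone overwrites what was learned on task A, while replaying stored task-A examples alongside task B, a simple rehearsal strategy, preserves much more of the earlier knowledge.

```python
def sgd_step(w, x, y, lr=0.1):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w
    return w - lr * (w * x - y) * x

def train(w, samples, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in samples:
            w = sgd_step(w, x, y, lr)
    return w

def error_on(w, samples):
    return sum(abs(w * x - y) for x, y in samples) / len(samples)

task_a = [(x / 10, 2 * x / 10) for x in range(1, 11)]    # target: y = 2x
task_b = [(x / 10, -2 * x / 10) for x in range(1, 11)]   # target: y = -2x

# Sequential training: task A, then task B alone. The weight is dragged
# all the way to task B's optimum (-2), "forgetting" task A entirely.
w_seq = train(train(0.0, task_a), task_b)

# Rehearsal: while learning task B, replay the stored task-A samples too,
# so the weight settles on a compromise that still serves task A.
w_rehearse = train(train(0.0, task_a), task_b + task_a)

print(error_on(w_seq, task_a), error_on(w_rehearse, task_a))
```

Rehearsal is only one of several mitigation strategies studied in the lifelong-learning literature; the point of the sketch is the failure mode itself, which scales up from this single weight to the millions of parameters in a deep network.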

This initiative involves a nationwide collaboration, with contributions from experts at UT Knoxville, UC San Diego, and Harvard University. The official launch of THOR is scheduled for February 23, marking a significant milestone for UTSA’s newly established College of AI, Cyber and Computing.

For Kudithipudi, the overarching goal is to ensure that the future of computing is not only more powerful but also more accessible and sustainable for all.

The information for this article was sourced from The American Bazaar.

OnPhase Appoints Indian-American Sudarshan Ranganath as Chief Product Officer

OnPhase has appointed Sudarshan Ranganath as Chief Product Officer to enhance its AI-driven financial automation platform amid the evolving needs of modern finance departments.

OnPhase, a key player in the AI-driven financial automation sector, has announced the appointment of Indian American executive Sudarshan Ranganath as its new Chief Product Officer. In this pivotal role, Ranganath will guide the company’s product vision and execution, with a focus on scaling its unified platform to address the dynamic requirements of contemporary finance departments.

Ranganath joins the Tampa-based company at a time when digital transformation is rapidly reshaping the office of the CFO. With over 20 years of experience in business spend management and digital payments, he brings a wealth of knowledge in developing intelligent, cloud-based solutions designed to simplify complex financial workflows. His appointment is viewed as a strategic move aimed at enhancing OnPhase’s market presence and accelerating the adoption of its automated payment technologies.

“I am thrilled to be joining OnPhase at such an exciting time,” Ranganath stated, highlighting the transformative impact of AI on finance teams. He pointed out that CFOs are increasingly pressured to deliver strategic insights while maintaining stringent operational controls. Ranganath believes that OnPhase’s unified platform is essential for eliminating friction and reducing manual errors in financial processes.

Before taking on this new role, Ranganath served as Senior Vice President of Product Management and Strategy at Corcentric. During his tenure, he played a crucial role in driving revenue growth through both organic innovation and strategic acquisitions. He is also recognized for developing an AI-centric trading partner network aimed at modernizing B2B commerce.

Ranganath’s career includes leadership positions at notable companies such as Ellucian, Rivermine, and VeriSign, where he concentrated on SaaS transformations and international expansion. His extensive background in accounts payable and payment software aligns seamlessly with OnPhase’s core value proposition, as emphasized by Robert Michlewicz, CEO of OnPhase.

“He has worked at the intersection of product strategy, technology, and customer outcomes,” Michlewicz remarked. “His leadership will be instrumental as we take our platform and our company to the next level.”

For over 25 years, OnPhase has provided organizations with comprehensive tools to manage the entire lifecycle of an invoice, from capture to final payment. By consolidating these functions into a single platform, the company aims to eliminate the data silos that often hinder traditional finance departments.

Currently recognized on both the Deloitte Technology Fast 500 and the Inc. 5000 lists, OnPhase continues to establish itself as a leader in empowering finance leaders to operate with greater clarity and confidence, according to The American Bazaar.

India Showcases Technological Innovations at AI Impact Summit 2026

India is hosting the AI Impact Summit 2026, gathering global tech leaders to explore the transformative potential of artificial intelligence across economies, governance, and society.

As artificial intelligence (AI) approaches a pivotal role in reshaping human civilization, India is welcoming a summit of global tech leaders to discuss its implications for economies, governance, and society. The five-day Artificial Intelligence Impact Summit 2026 commenced on Monday evening, with Prime Minister Narendra Modi inaugurating the India AI Impact Expo 2026 at Bharat Mandapam, the summit venue in New Delhi.

In a post on X, Modi emphasized the significance of the summit, stating, “This is proof that our nation is making rapid progress in the fields of science and technology and is contributing significantly to global development.” He further highlighted the potential and capabilities of India’s youth, underscoring the nation’s commitment to harnessing AI for human-centric progress.

The theme of the summit, ‘Sarvajana Hitaya, Sarvajana Sukhaya,’ translates to “welfare for all, happiness for all,” reflecting India’s dedication to utilizing AI for the benefit of all citizens. The first day featured a leadership session focused on harnessing AI for the future of learning and work, examining how AI is reshaping global employment and redefining necessary skills.

Another significant session addressed the transformation of India’s judicial ecosystem through AI. Experts discussed the technology’s potential to enhance efficiency, transparency, and accessibility within the judicial system. Additionally, the summit included discussions on culturally grounded AI and social norms, emphasizing that AI systems often fail not due to technical limitations but because they overlook essential social contexts.

The future of employability in the age of AI is a central theme, with experts exploring how AI may create new job opportunities while rendering some existing roles obsolete, necessitating large-scale workforce reskilling. A special session titled “Artificial Intelligence for Smart and Resilient Agriculture – From Research to Solutions” aimed to gather diverse perspectives on how AI can support sustainable, efficient, and climate-resilient agricultural practices.

This summit is notable as the first global AI summit of its kind to take place in the Global South. It aims to foster a future where AI’s transformative impact serves humanity, drives inclusive growth, and promotes people-centric innovations to protect the planet.

The groundwork for the summit included five rounds of public consultations and global outreach sessions held in cities such as Paris, Berlin, Oslo, New York, Geneva, Bangkok, and Tokyo. The summit is anchored in three guiding principles: the Sutras of People, Planet, and Progress, which frame how AI should serve humanity, safeguard the environment, and promote inclusive growth.

Prior to the New Delhi summit, a strategic pre-summit gathering took place in Washington, D.C., where policymakers, technologists, diplomats, and founders convened to discuss “Co-Creating the Future: Global South–Global North Collaboration for AI Impact.” This gathering reinforced the notion that AI discussions can no longer be geographically concentrated.

The New Delhi Summit aims to chart a path toward a future where AI’s transformative power serves humanity, fosters social development, and promotes innovations that protect the planet. It also seeks to amplify the voice of the Global South, ensuring that technological advancements and opportunities are shared broadly rather than concentrated in a few regions.

However, the rapid proliferation of AI across society presents urgent challenges, including disruptions to traditional employment patterns, exacerbation of biases, and increased energy consumption. These developments underscore the need to move beyond aspirational frameworks and deliver measurable, concrete impacts that address both the promises and perils of AI.

Ahead of the summit, OpenAI CEO Sam Altman praised India’s tech talent, national AI strategy, and optimism about the technology, stating that the country possesses “all the ingredients to be a full-stack AI leader.” In an article for The Times of India, he outlined three priorities for collaboration: scaling AI literacy, building computing and energy infrastructure, and integrating AI into real workflows.

Altman expressed OpenAI’s commitment to partnering with the Indian government to make AI and its benefits accessible to more people across the country. “AI will help define India’s future, and India will help define AI’s future. And it will do so in a way only a democracy can,” he wrote.

The AI Impact Summit 2026 represents a significant milestone in the global conversation surrounding artificial intelligence, highlighting India’s role as a leader in the technology’s development and implementation.

According to The American Bazaar, the summit is set to pave the way for a future where AI’s transformative capabilities are harnessed for the greater good.

Android Malware Disguised as Fake Antivirus App Targets Users

Cybersecurity experts warn that a fake antivirus app named TrustBastion is using Hugging Face to distribute Android malware that can steal sensitive information from users’ devices.

Android users should be on high alert as cybersecurity researchers have identified a new threat involving a fake antivirus application called TrustBastion. This malicious app exploits Hugging Face, a widely used platform for sharing artificial intelligence (AI) tools, to deliver dangerous malware that can capture screenshots, steal personal identification numbers (PINs), and display fraudulent login screens.

The TrustBastion app initially presents itself as a helpful security tool, claiming to offer virus protection, phishing defense, and malware blocking. However, once installed, it quickly reveals its true nature. The app falsely alerts users that their device is infected, prompting them to install an update that actually delivers the malware. This tactic, known as scareware, preys on users’ fears and encourages them to act without thinking.

According to Bitdefender, a global cybersecurity firm, the campaign surrounding TrustBastion is particularly concerning due to its deceptive nature. Victims are often misled by ads or warnings suggesting their devices are compromised, leading them to manually download the app. The attackers cleverly hosted TrustBastion’s APK files on Hugging Face, embedding them within seemingly legitimate public datasets, which allowed the malicious code to go unnoticed.

Once installed, TrustBastion immediately prompts users to download a “required update,” which is when the actual malware is introduced. Despite researchers reporting the malicious repository, Bitdefender noted that similar repositories quickly reemerged, often with minor cosmetic changes but maintaining the same harmful functionality. This rapid re-creation complicates efforts to fully eliminate the threat.

The malware associated with TrustBastion is invasive and poses significant risks. Bitdefender reports that it can take screenshots, display fake login screens for financial services, and capture users’ lock screen PINs. The stolen data is then transmitted to a third-party server, allowing attackers to drain bank accounts or lock users out of their devices.

Google has reassured users that those who stick to official app stores are generally protected against this type of malware. A Google spokesperson stated, “Based on our current detection, no apps containing this malware are found on Google Play.” Google Play Protect, which is enabled by default on Android devices with Google Play Services, helps safeguard users by warning them about or blocking apps known to exhibit malicious behavior, even if they originate from outside the Play Store.

This incident serves as a stark reminder of the importance of cautious app downloading practices. Users are advised to only download applications from reputable sources, such as the Google Play Store or the Samsung Galaxy Store, which have moderation and scanning processes in place. It is also crucial to scrutinize app ratings, download counts, and recent reviews, as fake security apps often garner vague feedback or experience sudden rating spikes.

Even the most vigilant users can fall victim to data exposure. Utilizing a data removal service can help eliminate personal information, such as phone numbers and email addresses, from data broker sites that criminals exploit. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of follow-up scams and account takeovers.

To further enhance security, users should regularly scan their devices with Google Play Protect and supplement it with robust third-party antivirus software. Although Google Play Protect automatically removes known malware, it is not infallible and has historically missed some malware on Android devices.

To safeguard against malicious links that could install malware and compromise personal information, users should ensure they have strong antivirus software installed across all devices. This software can also help detect phishing emails and ransomware, protecting personal information and digital assets.

Additionally, users should avoid installing apps from websites outside of official app stores, as these apps bypass essential security checks. It is vital to verify the publisher name and URL before downloading any application. Enabling two-step verification (2FA) and using strong, unique passwords stored in a password manager can also help prevent account takeovers.

Finally, users should remain cautious about granting accessibility permissions, as malware often exploits these to gain control over devices. This incident illustrates how quickly trust can be weaponized, with a platform designed for advancing AI research being repurposed to distribute malware. A fake antivirus app has become the very threat it claims to protect against, underscoring the need for users to scrutinize even seemingly trustworthy applications.

For those who have encountered suspicious activity on their devices, sharing experiences can help raise awareness. Users are encouraged to report their findings and concerns to relevant platforms.

According to Bitdefender, staying informed and cautious is the best defense against evolving cyber threats.

Astronauts Arrive at ISS for Eight-Month Mission Following Medical Emergency

Four astronauts arrived at the International Space Station for an eight-month mission, following an early evacuation due to a medical emergency last month.

Four new astronauts arrived at the International Space Station (ISS) on Saturday, restoring the lab to full capacity after a medical emergency forced an early evacuation of several crew members last month. The international crew, which includes NASA Commander Jessica Meir, launched from Cape Canaveral in a SpaceX rocket on Friday, embarking on a journey that lasted approximately 34 hours.

“That was quite the ride,” Meir remarked shortly after the launch, as reported by BBC News. “We have left the Earth, but the Earth has not left us.” The launch had faced delays due to weather concerns prior to takeoff.

Joining Meir for the next eight to nine months aboard the ISS are NASA astronaut Jack Hathaway, France’s Sophie Adenot, and Russian cosmonaut Andrei Fedyaev. Both Meir and Fedyaev have previous experience aboard the ISS, with Meir notably participating in the first all-female spacewalk in 2019. Adenot, a military helicopter pilot, is only the second French woman to travel to space, while Hathaway serves as a captain in the U.S. Navy.

NASA reported that the spacecraft was set to autonomously dock with the space station’s Harmony module at 3:15 p.m. CT on Saturday, traveling at a speed of 17,000 mph in Earth orbit. “What an absolutely wonderful start to the day,” said NASA Administrator Jared Isaacman following the launch. “This mission has shown in many ways what it means to be mission-focused at NASA.”

Isaacman also highlighted the recent adjustments made by NASA, including the early return of Crew-11 and the expedited launch of Crew-12, all while preparing for the upcoming Artemis 2 mission, which is scheduled to begin in early March.

This mission marks the 12th crew rotation with SpaceX as part of NASA’s Commercial Crew Program. Crew-12 will engage in scientific investigations and technology demonstrations aimed at preparing humans for future exploration missions to the Moon and Mars, as well as providing benefits for people on Earth.

After docking, the capsule’s hatch opened at 4:14 p.m. CT, allowing the crew to enter the space station. “We are so excited to be here and get to work,” Meir expressed upon arrival. Adenot added, “The first time we looked at the Earth was mind-blowing. … We saw no lines, no borders.”

Prior to the arrival of the new crew, only one American and two Russians remained at the space station, ensuring its continued operation. The medical evacuation that took place in January was the first of its kind in 65 years, as NASA reported that a crew member experienced a serious health issue. The agency has not disclosed the nature of the medical condition or the identity of the astronaut involved, citing medical privacy.

The astronaut who faced the medical emergency, along with three other crew members who had launched with them, returned to Earth more than a month earlier than planned after the decision was made to bring them home.

According to the Associated Press, the successful arrival of the new crew marks a significant step forward for ongoing research and exploration efforts aboard the ISS.

Superhealth Launches SuperOS, Claims First Agentic AI Hospital

Superhealth has introduced SuperOS, touted as the world’s first agentic AI operating system designed to manage hospital operations entirely, marking a significant advancement in healthcare automation in India.

Superhealth has launched what it claims to be the world’s first agentic AI operating system, named SuperOS, designed to manage a hospital from end to end. This initiative positions India as a potential leader in large-scale healthcare automation.

SuperOS is crafted as a comprehensive system that integrates nearly every aspect of hospital operations. According to the company, it encompasses everything from outpatient consultations and diagnostics to surgical workflows and discharge summaries. Varun Dubey, the founder of Superhealth, emphasized the platform’s capabilities, stating, “SuperOS is the world’s first agentic AI operating system built to actually run a hospital, from clinical decisions to operations, from labs to discharge, from OT assignments to auto prescriptions, it does it all.”

Dubey further explained that SuperOS understands the needs of doctors, nurses, and patients, as well as 15 Indian languages. The system orchestrates outcomes by facilitating real-time interactions between human staff and AI agents. “Only Superhealth could build this, because we are the only full-stack provider that designs, builds, and operates hospitals while also developing all the technology that runs them,” he added. “This is not software that merely assists healthcare. This is technology that operates healthcare.”

The introduction of SuperOS places Superhealth in the midst of global discussions about integrating AI into hospital systems. While many healthcare facilities are exploring AI tools for specific tasks, Superhealth is marketing SuperOS as a unified operating layer that connects clinical and administrative functions in real time.

According to the company, SuperOS serves as an intelligent framework across the hospital, coordinating tasks between AI agents and human teams. In outpatient departments, it acts as an ambient clinical co-pilot, providing patient history, assisting with differential diagnoses, drafting prescriptions for physician approval, and coordinating with lab technicians and pharmacists directly in the consultation room. The aim is to reduce wait times and enhance meaningful interactions between doctors and patients.

SuperOS is also integrated into radiology and pathology workflows. The platform replaces traditional Picture Archiving and Communication Systems (PACS) with cloud-based imaging systems and employs instant 3D volumetric analysis to aid in the detection of conditions in neurology, orthopaedics, chest trauma, and oncology. Superhealth claims that this integration reduces reporting time by 30 percent and effectively triples the capacity of specialists.

For inpatient and surgical care, SuperOS coordinates operating rooms, surgeons, and recovery workflows. It continuously monitors patients in both regular and intensive care units, utilizing personalized alerts, automating discharge summaries through a feature dubbed “Magic Discharge,” and conducting real-time audits of all clinical interactions to enhance medical quality.

Dubey framed the launch of SuperOS as part of a broader national ambition, stating, “India has a unique opportunity to show the world what real, meaningful healthcare AI looks like. SuperOS is built in India, for India, using Indian clinical data. It is also deployed in India and is focused on solving problems that matter to our country and our people.”

Superhealth is working to establish a network of 100 hospitals, supported by full-time senior clinicians, advanced infrastructure, and a zero-commission business model aimed at transparency and simplicity. Central to this expansion is SuperOS, which the company describes as operating seamlessly alongside healthcare professionals while enhancing efficiency across consultations, diagnostics, surgery, pharmacy, and recovery.

As hospitals worldwide face challenges such as staffing shortages, rising costs, and burnout, Superhealth is making a bold assertion that an AI-native operating system can transition from merely assisting care to actively managing it. The scalability of this model beyond India will be closely monitored by healthcare systems in the United States and other countries.

According to The American Bazaar, the implications of SuperOS could reshape the landscape of hospital management and patient care, setting a precedent for future innovations in healthcare technology.

Instagram Chief Defends App Design Amid Youth Mental Health Lawsuit

Adam Mosseri, head of Instagram, testified in a California trial addressing the platform’s impact on youth mental health, defending its design against claims of addiction and negligence.

Adam Mosseri, the head of Instagram, took the witness stand on Wednesday in a pivotal trial in Los Angeles that could significantly influence how Silicon Valley addresses the mental health of its youngest users.

During his testimony, Mosseri defended Instagram against allegations that the platform was intentionally designed to be addictive, particularly among young users, contributing to a mental health crisis among adolescents. The case was brought forth by a 20-year-old woman from California, identified as Kayle, who argued that the app’s “endless scroll” feature and instant gratification elements led to years of depression and body dysmorphia from an early age.

In response to the term “addiction,” Mosseri reframed the discussion, describing it as “problematic use” that varies from individual to individual. He also addressed internal communications from 2019 concerning face-altering “plastic surgery” filters. While some teams within the company raised concerns that these tools could harm the self-esteem of teenage girls, Mosseri and Meta CEO Mark Zuckerberg initially considered lifting a ban on such filters to promote user growth. Ultimately, the company decided to maintain the ban on filters that overtly promote cosmetic surgery.

“I was trying to balance all the different considerations,” Mosseri told the jury, according to reports from the courtroom.

Several parents who have lost children to the adverse effects of social media were present in the courtroom, sharing their grief as part of the ongoing case. Victoria Hinks, whose daughter died by suicide at the age of 16, stated that their children had become “collateral damage” in Silicon Valley’s “move fast and break things” culture. Outside the courthouse, she remarked, “Our children were the first guinea pigs,” a sentiment that Mosseri countered during his testimony by asserting that the “move fast and break things” motto, originally coined by Zuckerberg, is no longer applicable.

The plaintiff’s attorney, Mark Lanier, argued that the platform operates like a “slot machine in a child’s pocket,” designed to exploit developing brains for profit. He contended that Meta was aware of the psychological toll its platform could take but prioritized user engagement over the well-being of its young audience.

This trial serves as a critical “bellwether” for over 1,500 similar lawsuits filed across the country. It also tests the boundaries of Section 230, the federal law that typically protects platforms from liability for user-generated content. If the jury finds Meta negligent in its product design, it could lead to significant financial repercussions and compel substantial changes to social media algorithms.

Meta maintains that it has implemented numerous safety features for teens, including parental controls and time limits. Zuckerberg is expected to testify later this month as the trial continues to explore the complex relationship between technology profits and the vulnerability of the teenage mind, according to American Bazaar.

Back-to-Back Founder Exits Shake Elon Musk’s xAI Team

Elon Musk’s xAI is facing significant leadership changes as two co-founders recently departed, raising concerns about the company’s stability amid ambitious plans and regulatory scrutiny.

Elon Musk’s xAI is currently navigating a challenging period, marked by the recent departures of two co-founders within just two days. This leadership churn comes at a time when expectations for the company are exceptionally high, as Musk continues to promote bold ambitions for the future of artificial intelligence.

In the latest development, influential AI researcher Jimmy Ba announced his exit from xAI on Tuesday. In a post on X, Ba expressed gratitude for his early involvement, stating he was “grateful to have helped cofound at the start.” His departure follows that of fellow co-founder Tony Wu, who revealed his resignation just one day earlier.

The timing of these resignations is particularly notable, as they occurred shortly after xAI was merged with Musk’s aerospace company, SpaceX, earlier this month. This merger is reportedly part of SpaceX’s preparations for a public listing later this year.

Ba, who is a professor at the University of Toronto, played a significant role in developing research that informed xAI’s Grok 4 models. His exit adds to a growing list of senior departures from the startup, which has now seen six of its original twelve founders leave, five of them within the past year.

Other co-founders, including Igor Babuschkin, Kyle Kosic, and Christian Szegedy, have also exited the company. Additionally, Greg Yang announced last month that he would be scaling back his involvement to focus on his health, specifically dealing with Lyme disease.

The merger between xAI and SpaceX was structured as an all-stock transaction, valuing SpaceX at $1 trillion and xAI at $250 billion, according to documents cited by CNBC. Earlier, in March 2025, Musk utilized xAI in a separate all-stock deal to acquire his social media platform, X.

These leadership changes come amid increasing regulatory scrutiny for xAI in various regions, including Europe, Asia, and the United States. Investigations were initiated after xAI’s Grok chatbot and image generation tools were found to facilitate the large-scale creation and distribution of non-consensual explicit content, commonly referred to as deepfake pornography. This material included images of real individuals, including minors, raising alarms among regulators across multiple jurisdictions.

Musk founded xAI in 2023 with a team of 11 others, positioning the company as a competitor to OpenAI and Google in the rapidly evolving AI landscape. At its inception, xAI stated its mission was to “understand the true nature of the universe,” setting an ambitious tone for what Musk envisioned as a transformative venture.

In response to the recent departures, Musk quickly convened an all-hands meeting with xAI staff on Tuesday night. This meeting aimed to reset the narrative and outline a sweeping vision for the company’s future. According to reports from The New York Times, Musk told employees that xAI would eventually require a manufacturing base on the moon. He proposed the idea of building AI-powered satellites there and launching them into space using a massive catapult. “You have to go to the moon,” Musk stated, as reported by The New York Times.

Musk suggested that establishing a presence on the moon would provide xAI with access to computing capacity far exceeding that of its competitors. He implied that such advancements could unlock forms of intelligence that are currently difficult to conceptualize. “It’s difficult to imagine what an intelligence of that scale would think about,” he added, “but it’s going to be incredibly exciting to see it happen.”

As the company grapples with these leadership changes, Musk appears determined to refocus attention on xAI’s ambitious goals, including the potential for a public listing. The recent exits of key figures underscore the challenges facing the company, but Musk’s vision for the future remains steadfast.

According to The New York Times, the ongoing developments at xAI highlight the complexities of managing a rapidly evolving tech startup in an increasingly scrutinized industry.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms by 2030.

This week, NASA announced the completion of its strategy aimed at sustaining a human presence in space, particularly in light of the planned de-orbiting of the International Space Station (ISS) in 2030. The agency’s document underscores the necessity of ensuring extended stays in orbit following the retirement of the ISS.

“NASA’s Low Earth Orbit Microgravity Strategy will guide the agency toward the next generation of continuous human presence in orbit, enable greater economic growth, and maintain international partnerships,” the document states.

The commitment to this strategy comes amid concerns regarding the readiness of new space stations. With the incoming administration’s focus on budget cuts through the Department of Government Efficiency, there are apprehensions that NASA may face funding reductions.

“Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities,” said NASA Deputy Administrator Pam Melroy.

Commercial space company Voyager is actively developing one of the potential replacements for the ISS. The company has expressed support for NASA’s strategy to maintain a human presence in space. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” stated Jeffrey Manber, Voyager’s president of international and space stations.

The initiative to maintain a permanent human presence in space dates back to President Reagan, who emphasized the importance of private partnerships in his 1984 State of the Union address. “America has always been greatest when we dared to be great. We can reach for greatness,” he said, highlighting the potential for the space transportation market to exceed national capabilities.

The ISS, which has been continuously occupied for 24 years, was launched in 1998 and has hosted over 280 astronauts from 23 countries. The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and stressed the need to transition to commercial platforms—a policy that has been maintained by the Biden administration.

“Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031,” NASA Administrator Bill Nelson remarked in June.

Recent discussions have raised questions about the continuity of human presence in space. “I just want to talk about the elephant in the room for a moment, continuous human presence. What does that mean? Is it continuous heartbeat or continuous capability?” Melroy noted during the International Astronautical Congress in October.

NASA’s finalized strategy has taken into account the concerns of commercial and international partners regarding the potential loss of the ISS without a commercial station ready to take its place. “Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy explained. “I think this continuous presence, it’s leadership. Today, the United States leads in human spaceflight. The only other space station that will be in orbit when the ISS de-orbits, if we don’t bring a commercial destination up in time, will be the Chinese space station. We want to remain the partner of choice for our industry and for our goals for NASA.”

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

“We’ve had some challenges, to be perfectly honest with you. The budget caps that were a deal cut between the White House and Congress for fiscal years 2024 and 2025 have left us without as much investment,” Melroy acknowledged. “So, what we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit.”

Voyager has stated that it is on track with its development timeline and plans to launch its Starlab space station in 2028. “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station,” Manber asserted. “Everyone knows SpaceX, but there are hundreds of companies that have created the space economy. If we lose permanent presence, you lose that supply chain.”

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could be crucial for some projects. NASA may also consider funding new space station proposals, including concepts from Vast Space of Long Beach, California, which recently unveiled plans for its Haven modules, aiming to launch Haven-1 as soon as next year.

“We absolutely think competition is critical. This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” Melroy concluded.

According to Fox News, NASA’s strategy reflects a commitment to ensuring a sustainable human presence in space as the agency navigates the transition from the ISS to future commercial platforms.

Microsoft ‘Important Mail’ Email Scam: How to Identify It

Scammers are increasingly impersonating Microsoft, sending deceptive emails that threaten account access to trick victims into clicking malicious links.

Scammers are becoming more sophisticated in their tactics, particularly when it comes to impersonating reputable companies like Microsoft. Recently, a fraudulent email claiming to be an urgent warning about email account access has raised alarms among users.

The email appears serious and time-sensitive, which is a common strategy used by scammers to provoke immediate action. A concerned individual named Lily reached out for assistance, expressing uncertainty about the validity of the message she received. She attached screenshots of the email, hoping for guidance.

It is crucial to note that this email is not from Microsoft; it is a scam designed to rush individuals into clicking dangerous links. The urgency of the message is a red flag that should not be ignored.

Upon closer inspection, several warning signs indicate that the email is fraudulent. For instance, it begins with a generic greeting, “Dear User,” rather than addressing the recipient by name, which is a standard practice for legitimate Microsoft communications.

The email claims that the recipient’s email access will be suspended on February 5, 2026. Scammers often exploit fear and urgency to cloud judgment and prompt hasty decisions.

Additionally, the email originates from an AOL address (accountsettinghelp20@aol.com), which is another significant indicator of its illegitimacy. Microsoft does not send security notifications from AOL or any other third-party email service.

Another alarming feature of the email is the phrase “PROCEED HERE,” which is designed to incite quick clicks. Legitimate Microsoft communications will always direct users to clearly labeled Microsoft.com pages.

Moreover, the email contains phrases like “© 2026 All rights reserved,” which scammers often copy and paste to create a false sense of authenticity. Genuine Microsoft account alerts do not include image attachments, making this another major warning sign.
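The warning signs above (a non-Microsoft sender domain, urgency phrases, a generic greeting) are exactly the kinds of heuristics automated filters look for. As a minimal illustrative sketch only (not Microsoft’s actual filtering logic, and with an intentionally incomplete domain list), a few of these checks in Python:

```python
import re

# Illustrative allowlist only; real filters use far more complete data.
TRUSTED_DOMAINS = {"microsoft.com", "accountprotection.microsoft.com"}

URGENCY_PHRASES = ["will be suspended", "proceed here", "immediate action"]

def phishing_red_flags(sender: str, subject: str, body: str) -> list[str]:
    """Return a list of heuristic warning signs found in an email."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"sender domain '{domain}' is not a known Microsoft domain")
    text = f"{subject} {body}".lower()
    for phrase in URGENCY_PHRASES:
        if phrase in text:
            flags.append(f"urgency phrase: '{phrase}'")
    if re.search(r"\bdear (user|customer)\b", text):
        flags.append("generic greeting instead of the recipient's name")
    return flags
```

Run against the scam described here, the function flags all three signs: the AOL sender, the suspension threat, and the “Dear User” greeting.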

If a recipient were to click on the link provided in the email, they would likely be redirected to a counterfeit Microsoft login page. This is a tactic used by attackers to steal personal information, including email credentials, which can lead to further scams and identity theft.

To protect yourself from such scams, it is essential to take a cautious approach when encountering suspicious emails. Here are some steps to consider:

First, do not click on any links, buttons, or images in the email. Avoid replying to the message, and be cautious even when opening attachments, as they can trigger malware or tracking mechanisms.

Ensure that you have strong antivirus software installed and that it is up to date. This software can help block phishing attempts, scan attachments, and alert you to dangerous links before any damage occurs.

If you receive an email like this, report it and delete it from your inbox. There is no reason to keep it, even in your trash folder.

For peace of mind, open a new browser window and navigate directly to the official Microsoft account website. Sign in as you normally would; if there is a legitimate issue, it will be displayed there.

If you accidentally clicked on any links or entered your information, change your Microsoft password immediately. Use a strong, unique password that you do not use elsewhere. A password manager can help generate and securely store your passwords.

Additionally, check if your email has been exposed in previous data breaches. Some password managers include built-in breach scanners that can alert you if your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords and secure those accounts with new, unique credentials.

Enabling two-factor authentication (2FA) for your Microsoft account adds an extra layer of security, making it more difficult for attackers to gain access even if they have your password.

Scammers often gather information about potential targets through data broker sites. Using a data removal service can help minimize the amount of personal information available online, reducing your vulnerability to phishing attempts.

While no service can guarantee complete removal of your data from the internet, a data removal service can effectively monitor and erase your personal information from numerous websites, providing peace of mind.

Utilize your email app’s built-in reporting tool to help train filters and protect other users from encountering the same scam.

When Microsoft genuinely needs your attention, the communication will look very different from these scams. Recognizing the contrast can make it easier to identify fraudulent messages.

Scammers rely on urgency to distract and manipulate individuals, especially when it comes to something as central to our lives as email. The good news is that taking a moment to pause and verify can make a significant difference.

Lily’s decision to seek help before acting was a wise move that could prevent identity theft and account takeovers. Remember, emails that threaten account shutdowns and demand immediate action are almost always illegitimate. When faced with urgency, take a step back, verify independently, and never let an email rush you into a mistake.

If you have encountered a fake Microsoft warning or a similar scam, share your experience with us at Cyberguy.com.

For more information on protecting yourself from scams, consider signing up for the free CyberGuy Report, which offers tech tips, urgent security alerts, and exclusive deals delivered directly to your inbox.

According to CyberGuy.com, staying informed and cautious is key to safeguarding your digital life.

Ring’s AI Search Party Aims to Locate Lost Dogs More Efficiently

Ring has launched its AI-powered Search Party feature nationwide, enabling users to leverage nearby cameras to quickly locate lost dogs, even if they do not own a Ring device.

Ring has expanded its AI-powered Search Party feature across the United States, allowing anyone to utilize nearby cameras to help locate lost dogs more efficiently.

Losing a dog can be a distressing experience, often leading to frantic searches around the neighborhood and constant refreshes of local social media groups in hopes of finding a clue. To alleviate some of this stress, Ring aims to transform entire communities into additional eyes through the power of artificial intelligence. The Search Party feature now enables users to tap into a network of outdoor cameras to spot missing pets, and for the first time, it is accessible to anyone, regardless of whether they own a Ring camera.

Search Party is designed as a community-driven tool that expedites the reunion of lost dogs with their families. When a user reports a missing dog in the Ring app, nearby outdoor Ring cameras utilize AI to scan recent footage for potential matches. If a possible match is identified, the camera owner receives an alert containing a photo of the lost dog and a video clip. They can then choose to either ignore the alert or assist in the search, ensuring that sharing remains optional and pressure is minimized.

This update marks a significant shift in the functionality of Search Party. Previously, only individuals with Ring devices could access this feature. Now, anyone in the U.S. can download the free Ring Neighbors app, register, and post a lost dog alert. This change allows dog owners to connect with an existing network of cameras without the need for additional hardware or subscription fees. Neighbors without cameras can also contribute by sharing alerts and keeping an eye out for sightings.

Lost pets are already one of the most common types of posts in the Ring Neighbors app, with over 1 million reports of lost or found pets shared last year. Given that approximately 60 million households in the U.S. own at least one dog, the potential impact of Search Party is substantial.

Getting started with Search Party is straightforward. Users can download the Ring app for free from the App Store or Google Play. Once registered, anyone can create a Lost Dog Post in the app. If the post meets the necessary criteria, the app guides users through the steps to activate Search Party. This process involves sharing photos and basic information about the missing dog, after which nearby cameras will begin scanning automatically.

Search Party alerts are temporary. When a user initiates a Search Party in the Ring app, it operates for a few hours. If the dog remains missing, the user must renew the Search Party or start a new one to ensure that nearby cameras continue their search for matches. Once the dog is found, users can update their post to inform the community that the search is over.

The AI technology behind Search Party aims to reunite lost dogs with their owners efficiently. If an outdoor Ring camera detects a potential match, the camera owner is notified with an alert that includes a photo of the missing dog and a video clip. The camera owner retains control throughout the process, deciding whether to share footage or contact the owner through the app, all while keeping their phone number private.

Ring reports that Search Party has already yielded impressive results. In one instance, a woman named Kylee from Wichita, Kansas, was reunited with her mixed-breed dog, Nyx, just 15 minutes after he escaped through a small hole in her backyard fence. A neighbor’s Ring camera captured footage of Nyx and shared it through the app, providing Kylee with her only lead. “I was blown away,” Kylee said, emphasizing that even dogs with microchips can go unrecognized if they lack a collar. She credits the shared video for Nyx’s swift return, stating that she likely would not have found him without the Ring app.

Nyx is not the only success story. Ring claims that Search Party has facilitated the reunion of more than one lost dog per day, including pets like Xochitl in Houston, Truffle in Bakersfield, Lainey in Surprise, Zola in Ellenwood, Toby in Las Vegas, Blu in Erlanger, Zeus in Chicago, and Coco in Stockton, with more reunions occurring daily.

Search Party remains an optional feature that users can enable or disable at any time within the Ring app. Alongside this expansion, Ring has committed $1 million to equip animal shelters with camera systems, aiming to support up to 4,000 shelters across the United States. By integrating shelters into the network, Ring hopes to facilitate faster reconnections between dogs picked up by shelters and their owners. The company is also collaborating with organizations like Petco Love and Best Friends Animal Society and is open to additional partnerships.

Despite its benefits, the launch of Search Party last fall faced some criticism, particularly regarding privacy concerns and Ring’s connections to law enforcement. Ring maintains that participation is voluntary and that sharing footage is optional. However, the feature is enabled by default for compatible outdoor cameras, which has raised eyebrows. Nevertheless, the company appears confident in its offering and is actively promoting Search Party, even featuring it in a Super Bowl commercial.

Search Party taps into a familiar concept of neighbors helping one another during a challenging time. By making this feature available to everyone, Ring has removed a significant barrier, increasing the likelihood of quick reunions. Whether this tool becomes a community staple or ignites further privacy discussions will depend on how it is utilized by the public.

Would you be comfortable with neighborhood cameras assisting in the search for your lost dog, or does that raise concerns about surveillance? Share your thoughts with us at Cyberguy.com.

According to Fox News, the Search Party feature represents a significant advancement in community-driven pet recovery efforts.

SoundCloud Data Breach Affects Nearly 30 Million User Accounts

SoundCloud has confirmed a data breach affecting approximately 29.8 million user accounts, exposing email addresses and profile information to hackers and leaving many users unable to access their accounts.

SoundCloud, one of the world’s largest audio platforms, has reported a significant data breach that has compromised the personal and contact information of approximately 29.8 million users. This incident has left many affected users locked out of their accounts, encountering error messages when attempting to log in.

Founded in 2007, SoundCloud has grown into a prominent service for artists, hosting over 400 million tracks from more than 40 million creators. The scale of this breach raises serious concerns about user security. The company detected unauthorized activity linked to an internal service dashboard, prompting the initiation of its incident response process. Users began experiencing 403 Forbidden errors, particularly when connecting through virtual private networks (VPNs).

Initially, SoundCloud stated that the attackers accessed limited data and did not compromise passwords or financial information. The company claimed that the exposed information consisted of data that users had already made public on their profiles. However, subsequent disclosures revealed a more alarming situation.

According to the data breach notification service Have I Been Pwned, the attackers managed to harvest data from around 29.8 million accounts. Although no passwords were taken, the exposure of email addresses linked to public profiles poses a significant risk. This combination can facilitate phishing attempts, impersonation, and targeted scams.

Security researchers have linked the breach to ShinyHunters, a notorious extortion gang. Sources informed BleepingComputer that the group attempted to extort SoundCloud following the breach. SoundCloud confirmed these claims, stating that attackers made demands and launched email-flooding campaigns aimed at harassing users, employees, and partners. ShinyHunters has also claimed responsibility for recent voice phishing attacks targeting single sign-on systems at major companies such as Okta, Microsoft, and Google.

While the breach may seem less severe than those involving passwords or credit card information, this assumption can be misleading. Email addresses associated with real profiles enable scammers to craft convincing messages, posing as SoundCloud, brands, or even other creators. With access to follower counts and usernames, these messages can appear personal and credible. Once attackers gain the trust of their targets, they can push malicious links, malware, or fake login pages, often leading to larger account takeovers.

SoundCloud has not disclosed whether further details will be made available. The company confirmed the attack and the extortion attempt but has not responded to follow-up inquiries regarding the breach’s scope or its internal controls. For users, the long-term risk lies in how widely this dataset may spread. Once exposed, data rarely disappears and can circulate across forums, marketplaces, and scam networks for years.

In response to the breach, a SoundCloud representative stated, “We are aware that a threat actor group has published data online allegedly taken from our organization. Please know that our security team—supported by leading third-party cybersecurity experts—is actively reviewing the claim and published data.” The company has reiterated that it has found no evidence of sensitive data, such as passwords or financial information, being accessed.

For those with SoundCloud accounts, it is crucial to take immediate action. Even limited data exposure can lead to targeted scams if ignored. Users should be vigilant and monitor their inboxes for messages related to SoundCloud, music uploads, copyright issues, or account warnings. It is advisable not to click on links or open attachments from unexpected emails. When in doubt, users should visit the official website directly instead of using email links. Additionally, employing strong antivirus software can provide an extra layer of protection.

While passwords were not exposed, changing them is still a prudent measure. Users should create new passwords that are unique and not reused across other platforms. For those who struggle to remember passwords, utilizing a password manager can help generate and securely store strong passwords, thereby reducing the risk of reuse.

Furthermore, users should check if their email addresses have been involved in past breaches. Many password managers include built-in breach scanners that can alert users if their email addresses or passwords have appeared in known leaks. If a match is found, it is essential to change any reused passwords and secure those accounts with new, unique credentials.
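Many of these breach scanners check passwords against the public Have I Been Pwned “Pwned Passwords” range API, which uses a k-anonymity scheme: only the first five hex characters of the password’s SHA-1 hash leave your machine, and the match is done locally. A minimal sketch of how that works (the endpoint is the real public API; error handling and retries are omitted):

```python
import hashlib
import urllib.request

def sha1_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent
    to the API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none)."""
    prefix, suffix = sha1_range_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Each response line is "HASH_SUFFIX:COUNT".
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

The server never sees the full hash, let alone the password itself, which is why this kind of lookup is considered safe to use even for live credentials.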

Implementing two-factor authentication (2FA) adds an important security layer in case someone attempts to access an account. Even if attackers manage to guess or obtain a password, they will still require a second verification step. Users should enable 2FA wherever SoundCloud or connected services offer it.

After most breaches, attackers often use exposed email addresses to test logins across various streaming services, social media, and shopping accounts. Users should be on the lookout for password reset emails they did not request or login alerts from unfamiliar locations. If anything seems suspicious, it is vital to act quickly.

The SoundCloud breach serves as a reminder that data breaches can have far-reaching consequences, even when the exposed information appears harmless. Public profile data combined with private contact details creates real exposure. Staying alert, limiting data sharing, and adopting strong security practices remain the best defenses as data breaches continue to escalate.

For further information and updates on this situation, users are encouraged to stay informed and proactive in protecting their online presence, especially in light of the evolving landscape of cyber threats. According to Have I Been Pwned, vigilance is key in safeguarding personal information.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers recently misidentified a Tesla Roadster launched into space by SpaceX in 2018 as an asteroid, prompting a swift correction from the Minor Planet Center.

Astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics in Massachusetts recently made an amusing error when they mistook a Tesla Roadster for an asteroid. This incident occurred earlier this month, nearly seven years after the car was launched into orbit by SpaceX CEO Elon Musk.

The object, initially designated as 2018 CN41, was registered by the Minor Planet Center but was deleted from the registry just one day later on January 3. The center clarified that the object’s orbit matched that of an artificial object, specifically the Falcon Heavy upper stage with the Tesla Roadster attached. In a statement on their website, they noted, “The designation 2018 CN41 is being deleted and will be listed as omitted.”

The Tesla Roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. At the time, it was expected to enter an elliptical orbit around the sun, extending just beyond Mars before looping back toward Earth. However, Musk later indicated that the vehicle exceeded Mars’ orbit and continued on toward the asteroid belt.

When the Roadster was misidentified as an asteroid earlier this month, it was located less than 150,000 miles from Earth—closer than the moon’s orbit. This proximity raised concerns among astronomers, who felt it necessary to monitor the object closely.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the implications of this mix-up. He pointed out the challenges associated with untracked objects in space, stating, “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” highlighting the potential risks of misidentification.

The incident serves as a reminder of the complexities involved in tracking artificial objects in space, especially as more private companies like SpaceX continue to launch vehicles into orbit.

Fox News Digital has reached out to SpaceX for further comment regarding the incident.

According to Astronomy Magazine, the mix-up illustrates the ongoing challenges in space observation and the importance of accurate tracking systems as the number of objects in orbit continues to grow.

Qualcomm Completes 2nm Chip Design at Indian Centers

Qualcomm Technologies has achieved a significant milestone by completing the tape-out of its 2nm semiconductor design, showcasing India’s growing role in advanced chip design.

Qualcomm Technologies, a leading American chipmaker, has announced the successful tape-out of its 2nm semiconductor design. This achievement marks a pivotal moment in advanced chip design and highlights India’s rapidly expanding semiconductor ecosystem.

The breakthrough was developed with substantial contributions from Qualcomm’s engineering centers located in Bengaluru, Chennai, and Hyderabad. This accomplishment reinforces India’s emerging status as a global hub for cutting-edge chip design.

According to Qualcomm, this milestone reflects the depth of its engineering presence in India, which has become the company’s largest engineering footprint outside the United States. The achievement underscores India’s expanding role in advanced semiconductor innovation.

The milestone was showcased at Qualcomm’s facility in Bengaluru during a visit from Ashwini Vaishnaw, the Indian Minister for Railways, Information and Broadcasting, and Electronics and IT. Vaishnaw remarked that “India is increasingly at the center of how advanced semiconductor technologies are being designed for the future.” He described the development as a testament to the growing maturity of the country’s design ecosystem and its ambition to establish a globally competitive semiconductor industry.

Qualcomm has invested in India for over two decades, building extensive capabilities in wireless technology, computing, artificial intelligence, and system-level engineering. The company’s teams in India contribute to various aspects of design implementation, validation, AI optimization, and system integration, supporting global architecture and platforms that power billions of devices worldwide.

Amitesh Kumar Sinha, Additional Secretary at the Ministry of Electronics and IT and CEO of the India Semiconductor Mission, stated, “India’s Semiconductor Mission is progressing with strong momentum, supported by a strengthening design ecosystem and sustained industry participation.” He emphasized that investments in advanced engineering and research and development capabilities are crucial for building long-term semiconductor capacity in the country.

Sinha further noted that Qualcomm’s long-term commitment to India reflects the growing depth of the country’s semiconductor design ecosystem and contributes to India’s broader ambition of becoming a globally competitive hub for semiconductor innovation.

Srini Maddali, Senior Vice President of Engineering at Qualcomm India, described the 2nm tape-out as a validation of the engineering talent available in the country. “Working closely with global program and architecture teams on advanced semiconductor design requires the very best talent, and our India teams consistently deliver at a global standard,” he said.

Qualcomm’s research and development centers in India now contribute across multiple layers of system design, from architecture to software platforms and AI-driven use-case optimization. This is particularly critical in an era characterized by intelligent and connected systems.

The successful tape-out of the 2nm chip design comes at a time when India is intensifying its efforts to position itself as a global semiconductor hub. This initiative is supported by policy measures, ecosystem incentives, and industry partnerships. Qualcomm’s latest milestone adds momentum to this push, signaling that India is not just assembling chips for the world but is increasingly involved in designing the future of semiconductor technology.

Qualcomm, headquartered in San Diego, has maintained a presence in India for over 20 years, during which it has developed one of its largest engineering capabilities outside the United States. This long-standing investment underscores the company’s commitment to fostering innovation and development within India’s semiconductor landscape.

The developments at Qualcomm highlight the potential for India to become a key player in the global semiconductor industry, as the nation continues to build its capabilities and attract investment.

According to The American Bazaar, Qualcomm’s advancements are a significant step forward for India’s semiconductor ambitions.

Fox News AI Newsletter Highlights Misinformation Claims About Artificial Intelligence

The Fox News AI Newsletter highlights concerns over misinformation regarding artificial intelligence, job displacement, and the implications of AI on society and the economy.

The Fox News AI Newsletter provides readers with the latest advancements in artificial intelligence technology, exploring both the challenges and opportunities that AI presents in today’s world.

In a recent op-ed, Shyam Sankar, the chief technology officer of Palantir Technologies, asserted that “the American people are being lied to about AI.” He emphasized that one of the most significant misconceptions is the belief that artificial intelligence will lead to widespread job displacement for American workers.

Elon Musk, the billionaire entrepreneur, has stirred controversy by suggesting that individuals should not prioritize retirement savings due to the transformative potential of AI. Musk claims that advancements in artificial intelligence could render traditional savings strategies obsolete within the next decade or two. However, this perspective has raised eyebrows among financial experts.

Amid rising concerns about the economic impact of AI, Chevron CEO Mike Wirth outlined the company’s strategy to leverage U.S. natural resources to meet the increasing power demands of AI technologies. Wirth assured consumers that the company aims to absorb these costs rather than passing them on to customers, which is particularly important as electricity prices have surged in recent years.

Data centers and AI technologies have been linked to escalating electricity costs across the United States. According to reports, American consumers faced a staggering 42% increase in home power costs compared to a decade ago, raising questions about the sustainability of such growth.

As the implementation of AI technology accelerates, recent polling indicates that many voters believe the integration of AI into society is progressing too quickly. Additionally, there is widespread skepticism regarding the federal government’s ability to effectively regulate these emerging technologies.

Privacy concerns have also come to the forefront, particularly with the rise of popular mobile applications like Chat & Ask AI. This app, which boasts over 50 million users on platforms such as Google Play and the Apple App Store, has been criticized by independent security researchers for allegedly exposing hundreds of millions of private chatbot conversations online.

In a more optimistic tone, executives at Alphabet, Google’s parent company, expressed confidence during a recent post-earnings call. They indicated that the company’s substantial investments in artificial intelligence are beginning to yield tangible revenue growth across various sectors of the business.

Sankar further elaborated on the potential of AI in the workplace, describing it as a “massively meritocratic force.” He offered insights to corporate leaders on how to strategically position their companies and employees to thrive in an AI-driven environment.

In a cautionary tale, a woman named Abigail fell victim to a sophisticated scam, believing she was in a romantic relationship with a well-known actor. The messages, voice, and video appeared authentic, leading her to lose over $81,000 and her paid-off home, which she had intended to use for retirement.

As discussions surrounding artificial intelligence continue to evolve, it is crucial for individuals and organizations to remain informed about the implications of these technologies on society and the economy. For ongoing updates and insights into AI advancements, readers can turn to Fox News.

According to Fox News, the conversation around AI is just beginning, and understanding its impact will be essential for navigating the future.

Mars’ Red Color Linked to Potentially Habitable Past, Study Finds

Mars’ reddish hue may be linked to a mineral called ferrihydrite, suggesting the planet had a habitable environment capable of sustaining liquid water in its ancient past, according to a new study.

A recent study has revealed that the distinctive red color of Mars is primarily due to a mineral known as ferrihydrite, which forms in the presence of cool water. This finding challenges previous assumptions that hematite was the main contributor to the planet’s iconic hue.

Ferrihydrite is unique in that it forms at lower temperatures than other minerals found on Mars, indicating that the planet may have once had conditions suitable for liquid water before transitioning to its current dry state billions of years ago. NASA highlighted this potential in a news release this week, noting that the agency partially funded the study.

The research, published in the journal Nature Communications, involved an analysis of data collected from various Mars missions, including those conducted by several rovers. The team compared this data to laboratory experiments designed to simulate Martian conditions, where they tested how light interacts with ferrihydrite particles and other minerals.

Adam Valantinas, the study’s lead author and a postdoctoral fellow at Brown University, explained the historical context of the research. “The fundamental question of why Mars is red has been considered for hundreds, if not thousands, of years,” he stated. Valantinas, who began this research as a Ph.D. student at the University of Bern in Switzerland, emphasized the significance of their findings. “From our analysis, we believe ferrihydrite is present in the dust and likely in the rock formations as well,” he added.

While ferrihydrite’s role in Mars’ coloration has been suggested before, this study provides a more robust framework for testing the hypothesis using both observational data and innovative laboratory techniques that replicate Martian dust.

Jack Mustard, the senior author of the study and a professor at Brown University, described the research as a “door-opening opportunity.” He noted the importance of the ongoing sample collection by the Perseverance rover, stating, “When we get those back, we can actually check and see if this is right.” Mustard’s comments underline the potential for future discoveries regarding Mars’ geological history.

The study suggests that Mars may have once had a cool, wet climate that could have supported life. Although the planet’s current climate is too cold to sustain life, evidence indicates that it once had an abundance of water, as reflected in the presence of ferrihydrite in its dust.

Geronimo Villanueva, Associate Director for Strategic Science at NASA’s Goddard Space Flight Center and a co-author of the study, remarked on the implications of the findings. “These new discoveries point to a potentially habitable past for Mars and highlight the value of coordinated research between NASA and its international partners when exploring fundamental questions about our solar system and the future of space exploration,” he said.

Valantinas further elaborated on the research objectives, stating, “What we want to understand is the ancient Martian climate and the chemical processes on Mars—not only ancient but also present.” He also addressed the habitability question, asking, “Was there ever life?” To answer this, researchers need to understand the conditions that existed during the formation of ferrihydrite.

According to Valantinas, the formation of ferrihydrite requires specific conditions where oxygen from the atmosphere or other sources interacts with iron in the presence of water. These conditions were markedly different from today’s dry and cold environment. As Martian winds spread the dust across the planet, they contributed to Mars’ iconic red appearance.

As research continues, the findings from this study may reshape our understanding of Mars’ geological history and its potential to have supported life in the past, paving the way for future exploration and discovery.

According to NASA, the implications of this research extend beyond just understanding Mars’ color; they may also provide insights into the planet’s capacity to host life in its ancient past.

European Union Alleges TikTok Violates Technology Laws with Addictive Features

The European Union has formally accused TikTok of violating technology laws by employing addictive design features that may harm users, particularly minors, as part of a broader regulatory crackdown on social media platforms.

The European Commission has issued preliminary findings alleging that TikTok’s platform design deliberately fosters addictive behavior among its European user base.

On Friday, the European Union escalated its regulatory scrutiny of the social media landscape by formally accusing TikTok of violating the bloc’s landmark technology laws. The European Commission, the EU’s executive arm, claims that the platform employs specific “addictive design” features that may compromise the mental and physical well-being of its users, particularly minors. This move signifies a significant escalation in the ongoing tension between Brussels and major technology firms regarding the long-term societal impacts of digital consumption.

Central to the Commission’s allegations are several hallmark features of the TikTok user experience, including the infinite scroll mechanism, default autoplay settings, and frequent push notifications. The investigation also focuses on the platform’s highly personalized recommender system, which regulators argue creates a “rabbit hole” effect that can be difficult for users to escape. The EU contends that these tools were designed to maximize engagement at the expense of user health, creating a feedback loop that constitutes a breach of the Digital Services Act.

Under the Digital Services Act, large online platforms are legally required to assess and mitigate systemic risks associated with their services. The European Commission asserts that TikTok failed to conduct a sufficiently rigorous assessment of how its design choices impact the psychological development of its younger demographic. Furthermore, the findings suggest that TikTok’s existing safety measures, such as parental controls and screen-time management tools, are insufficient to counteract the compulsiveness inherent in the platform’s primary interface.

Henna Virkkunen, the European Commission’s Executive Vice President for Tech Sovereignty, Security, and Democracy, emphasized the gravity of the situation in a public statement. She noted that social media addiction can have profound and detrimental effects on the developing minds of children and teenagers, leading to issues ranging from sleep deprivation to increased anxiety. Virkkunen asserted that the Digital Services Act was specifically designed to hold platforms accountable for these outcomes, reinforcing Europe’s commitment to protecting its citizens from digital harms.

In response to the allegations, TikTok has firmly denied the Commission’s findings, characterizing them as a fundamental misunderstanding of its platform. A spokesperson for the company stated that the EU’s depiction of TikTok is categorically false and meritless. TikTok has vowed to challenge the findings through all available legal channels, maintaining that it has consistently invested in safety features and transparency measures to support its community in Europe and beyond.

This legal friction follows a previous encounter between TikTok and EU regulators. In October, the company was found in violation of the Digital Services Act for failing to provide independent researchers with adequate access to public data. While TikTok managed to avoid a significant financial penalty in that instance by agreeing to a series of transparency commitments in December, this latest accusation regarding addictive design represents a more fundamental challenge to its core business model and user experience design.

The European Union’s move aligns with a growing global trend of litigation and regulation targeting the design architecture of social media apps. Recently, TikTok reached a settlement in a separate case where it was accused, alongside several other major tech firms, of intentionally designing its platform to foster addiction in children. Snap, the parent company of Snapchat, also reached a settlement shortly before its case was scheduled to go to trial, reflecting a shift in how these companies approach legal liability regarding user health.

The broader legal battle continues to unfold in courtrooms elsewhere. A high-profile trial involving Meta and YouTube proceeded last week after those companies chose not to settle. These cases are being closely monitored by regulators and industry analysts alike, as they could set a significant precedent for how the concept of “addictive design” is defined and regulated under modern consumer protection laws. The outcome of the EU’s investigation could lead to substantial fines, potentially reaching up to six percent of a company’s global annual turnover under the Digital Services Act.

The Digital Services Act is part of a duo of comprehensive tech laws, alongside the Digital Markets Act, intended to curb the power of “gatekeeper” platforms and ensure a safer digital environment. By targeting the algorithmic and structural elements of TikTok, the EU is signaling that it will no longer accept a hands-off approach to platform moderation. This focus on “recommender systems” is particularly notable, as these algorithms are the primary drivers of content discovery and user retention for modern social media companies.

Critics of the tech industry have long argued that the design choices mentioned by the Commission—such as the lack of a natural stopping point in an infinite scroll—are not accidental but are intentional psychological triggers. The EU’s investigation will now move into a more formal phase, where TikTok will have the opportunity to present evidence in its defense. However, the preliminary nature of these findings suggests that the Commission is confident in its initial assessment that the platform’s current safeguards are inadequate for the scale of the risk.

Beyond the legal implications, the investigation highlights a deepening divide between the regulatory philosophies of Europe and the United States. While the U.S. has seen various state-level efforts and individual lawsuits against tech giants, the EU’s centralized enforcement of the Digital Services Act provides a unified regulatory front that is unique in its reach and authority. This centralized approach allows the Commission to act as a singular watchdog for hundreds of millions of users, putting immense pressure on global companies to harmonize their safety standards with European law.

As the case progresses, the tech industry will be looking for clarity on what constitutes a “safe” design. If features like autoplay and personalized feeds are deemed inherently harmful by European regulators, it may force a total redesign of many popular applications. For TikTok, which relies heavily on its proprietary algorithm to maintain its competitive edge, the stakes could not be higher. The company must now prove that its engagement metrics do not come at the cost of the digital health of its most vulnerable users.

The timeline for a final decision remains uncertain, but the European Commission has signaled that it intends to move swiftly. Given the public nature of the accusations and the high-profile statements from EU leadership, it is clear that Brussels views this case as a landmark opportunity to define the boundaries of platform responsibility in the twenty-first century. For now, the tech world remains in a state of high alert as the definition of digital safety continues to be rewritten in the halls of European governance.

According to GlobalNetNews.

Tech Layoffs in 2026: A Comprehensive Overview

Tech layoffs continue to pose significant challenges in early 2026, following a tumultuous year for the industry in 2025.

The tech industry is grappling with ongoing layoffs as 2026 unfolds, echoing the difficulties faced in the previous year. In 2025, mass layoffs raised concerns about job security and the overall health of the job market, particularly amid increasing automation and the growing use of artificial intelligence. As the new year begins, major companies are continuing to announce job cuts, signaling that the trend is far from over.

Amazon has been at the forefront of these layoffs, cutting approximately 16,000 jobs in January, followed by an additional 2,200 in early February. These reductions are part of CEO Andy Jassy’s strategic initiative to streamline operations, reduce bureaucracy, and divest from underperforming business segments. Since October 2025, Amazon’s layoffs have totaled around 18,200 positions.

Ericsson, the telecommunications giant, has also announced plans to eliminate 1,600 jobs in Sweden. This decision is part of the company’s ongoing cost-saving measures aimed at navigating a prolonged downturn in telecom spending. Ericsson’s commitment to these measures underscores the challenges faced by the industry as it adapts to changing market conditions.

Chipmaking company ASML is set to cut around 1,700 jobs across the Netherlands and the United States. The layoffs are intended to bolster the company’s focus on engineering and innovation, with the majority of cuts affecting leadership roles within its technology and IT teams.

Meta, the parent company of Facebook, has laid off 1,500 employees as part of a restructuring of its Reality Labs division. This move comes as Meta shifts its investment focus from the Metaverse to wearable technology, following disappointing traction in the Metaverse space.

Autodesk, known for its design software, has announced it will reduce its global workforce by approximately 1,000 jobs, representing about 7% of its total employees. The company aims to redirect its spending towards its cloud platform and artificial intelligence initiatives, with the majority of job cuts affecting customer-facing sales teams.

Pinterest is also restructuring, planning to lay off nearly 15% of its workforce. This decision aligns with the company’s strategy to allocate more resources towards artificial intelligence, as it seeks to support transformation initiatives and prioritize AI-driven products.

Sapiens, a software provider, has revealed plans to cut hundreds of jobs, with the most significant impacts expected in India and the United States. Reports suggest that approximately 540 employees will be affected, although the distribution of layoffs will not be uniform across regions.

Additionally, Oracle is reportedly considering laying off around 30,000 employees and selling its health tech unit, Cerner, according to analysts at TD Cowen. While the full extent of the layoffs remains uncertain, the early announcements in 2026 indicate a challenging year ahead for tech employees.

As these companies navigate their respective challenges, the ongoing trend of layoffs raises questions about the future of employment in the tech sector. The impact of automation and artificial intelligence continues to reshape the landscape, leaving many employees uncertain about their job security.

According to The American Bazaar, the developments in the tech industry signal a need for adaptability and resilience among workers as they face an evolving job market.

OpenAI Experiences Senior Leadership Departures Amid ChatGPT Expansion

OpenAI is experiencing a significant turnover among its senior leadership as CEO Sam Altman reallocates resources to enhance ChatGPT, sidelining long-term research initiatives.

OpenAI has recently witnessed a wave of senior-level departures following CEO Sam Altman’s directive to prioritize resources for ChatGPT, according to a report by the Financial Times. This strategic shift has redirected computing power and personnel away from experimental projects, leading to high-profile exits within the organization.

Among those who have left is Jerry Tworek, the vice president of research, who departed in January after spending seven years at OpenAI. Tworek had been advocating for increased resources for his work on AI reasoning and continuous learning—the capability of models to assimilate new information without losing previously acquired knowledge. His efforts reportedly culminated in a standoff with chief scientist Jakub Pachocki, who favored focusing on OpenAI’s existing architecture around large language models, which he deemed more promising.

The departures follow Altman’s issuance of an internal “code red” in December 2025, during which he emphasized the urgent need for improvements in ChatGPT’s speed, personalization, and reliability. This memo effectively shelved initiatives related to advertising, AI shopping agents, and a personal assistant project known as Pulse. The code red was prompted by the emergence of Google’s Gemini 3, which surpassed OpenAI in key performance benchmarks, resulting in a surge in Alphabet’s stock value.

At OpenAI, researchers are required to apply for computing “credits” from top executives to initiate their projects. According to ten current and former employees who spoke with the Financial Times, those working on projects outside of large language models have increasingly found their requests either denied or granted insufficient resources to effectively pursue their research.

Teams responsible for projects like the video generator Sora and the image tool DALL-E have expressed feelings of neglect, as their work has been deemed less critical to the ChatGPT initiative. One senior employee remarked that they “always felt like a second-class citizen” compared to the primary focus areas. Over the past year, several projects unrelated to language models have been quietly phased out.

In January, Andrea Vallone, who led model policy research, joined competitor Anthropic after being assigned what she described as an “impossible” task—ensuring the mental well-being of users who were becoming emotionally attached to ChatGPT.

OpenAI’s pivot towards ChatGPT comes amid intensifying competition in the AI landscape. Google’s Gemini now boasts 650 million monthly users, a significant increase from 450 million in July 2025. Additionally, Anthropic has captured 40% of the enterprise market share, compared to OpenAI’s 27%, according to data from Menlo Ventures. Chief Research Officer Mark Chen has stated that foundational research “remains central” to OpenAI’s mission and still accounts for the majority of the company’s computing resources. However, many researchers feel that the current focus on optimizing a chatbot diverges from their original intentions for joining the organization.

The ongoing shifts at OpenAI highlight the challenges faced by the company as it navigates the competitive landscape of artificial intelligence, balancing immediate product demands with long-term research goals.

These developments underscore the complexities of innovation in a rapidly evolving field, where the pressure to deliver results can sometimes overshadow foundational research efforts.

According to the Financial Times, the implications of these changes could have lasting effects on OpenAI’s research capabilities and overall direction.

Microsoft’s Recent Actions Raise Unexpected Privacy Concerns

Microsoft’s provision of BitLocker encryption keys to law enforcement has raised significant concerns about digital privacy and the implications of encrypted data accessibility.

For years, encryption has been heralded as the gold standard for digital privacy, promising to safeguard data from hackers, corporations, and government entities alike. However, recent developments have cast doubt on this assumption. In a federal investigation related to alleged COVID-19 unemployment fraud in Guam, Microsoft confirmed it provided law enforcement with BitLocker recovery keys, enabling investigators to unlock encrypted data on several laptops.

This incident marks one of the clearest public examples of Microsoft complying with law enforcement requests for BitLocker recovery keys during a criminal investigation. While the warrant may have been lawful, the implications extend far beyond this single case. For many Americans, this situation serves as a stark reminder that “encrypted” does not always equate to “inaccessible.”

Federal investigators believed that three Windows laptops contained evidence linked to an alleged scheme involving pandemic unemployment funds. These devices were secured with BitLocker, Microsoft’s built-in disk encryption tool that is enabled by default on many modern Windows PCs. BitLocker encrypts all data on a hard drive, rendering it unreadable without a recovery key. Users can choose to store this key themselves, but Microsoft encourages backing it up to a Microsoft account for convenience. In this instance, that convenience proved significant. Upon receiving a valid search warrant, Microsoft provided the recovery keys to investigators, granting them full access to the data on the devices.

According to Microsoft, the company receives approximately 20 such requests annually and can only comply when users have opted to store their keys in the cloud. Attempts to reach Microsoft for further comment were unsuccessful before the article’s deadline.
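The distinction Microsoft describes, that it can comply only when a user has opted to back the recovery key up to the cloud, can be illustrated with a small toy model. This is a hypothetical sketch, not real cryptography and not Microsoft’s actual key-escrow system; the `Provider` and `Device` classes and all names here are invented for illustration:

```python
import secrets

# Toy model of the key-escrow tradeoff: a provider can be compelled to
# hand over only the recovery keys it actually holds. Illustrative only;
# real disk encryption should use vetted, audited implementations.

class Provider:
    def __init__(self):
        self._escrowed = {}  # device_id -> recovery key backed up by the user

    def escrow(self, device_id, key):
        # The user chooses to store the key in their cloud account.
        self._escrowed[device_id] = key

    def compelled_unlock(self, device_id):
        # A warrant can only yield a key the provider possesses.
        return self._escrowed.get(device_id)

class Device:
    def __init__(self, device_id):
        self.device_id = device_id
        self.key = secrets.token_bytes(32)  # key stays on-device unless escrowed

provider = Provider()

laptop_a = Device("laptop-A")
provider.escrow(laptop_a.device_id, laptop_a.key)  # user opted into cloud backup

laptop_b = Device("laptop-B")                      # user kept the key local

assert provider.compelled_unlock("laptop-A") == laptop_a.key  # provider can comply
assert provider.compelled_unlock("laptop-B") is None          # nothing to hand over
```

The point the sketch makes is architectural: whether a lawful request succeeds is determined at key-storage time by the user’s (or the default’s) choice, not at warrant time.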

John Ackerly, CEO and co-founder of Virtru and a former White House technology advisor, emphasizes that the issue lies not with encryption itself but with who controls the keys. He explains that the convenience of backing up BitLocker recovery keys to a Microsoft account means that Microsoft retains the technical ability to unlock a customer’s device. “When a third party holds both encrypted data and the keys required to decrypt it, control is no longer exclusive,” Ackerly states.

He warns that once a provider has the capability to unlock data, that power rarely remains theoretical. “When systems are built so that providers can be compelled to unlock customer data, lawful access becomes a standing feature. It is important to remember that encryption does not distinguish between authorized and unauthorized access,” he adds. “Any system designed to be unlocked on demand will eventually be unlocked by unintended parties.”

Ackerly points out that this outcome is not inevitable. Other technology companies have made different architectural choices. For instance, Apple has designed systems that limit its ability to access customer data, even when complying with government requests. Google offers client-side encryption models that allow users to retain exclusive control of their encryption keys. These companies comply with the law, but since they do not hold the keys, they cannot unlock the data. This distinction is crucial.

He believes Microsoft has the opportunity to change its approach. “Microsoft could address this by making customer-controlled keys the default and by designing recovery mechanisms that do not place decryption authority in Microsoft’s hands,” Ackerly suggests. “True personal data sovereignty requires systems that make compelled access technically impossible, not merely contractually discouraged.” In essence, Microsoft’s ability to comply with the warrant stemmed from a single design decision that transformed encrypted data into accessible data.

A Microsoft spokesperson stated, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s consumer cloud services. We recognize that some customers prefer Microsoft’s cloud storage, so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

This case has reignited a longstanding debate over lawful access versus systemic risk. Ackerly warns that centralized control has a troubling history. “We have seen the consequences of this design pattern for more than two decades,” he says. “From the Equifax breach, which exposed the financial identities of nearly half the U.S. population, to repeated leaks of sensitive communications and health data during the COVID era, the pattern is consistent: centralized systems that retain control over customer data become systemic points of failure. These incidents are not anomalies; they reflect a persistent architectural flaw.”

When companies hold the keys, they become targets for hackers, foreign governments, and legal demands from agencies like the FBI. Once a capability exists, it is rarely left unused. Apple has implemented systems, such as Advanced Data Protection, that prevent it from accessing certain encrypted user data, even when faced with government requests. Google also offers client-side encryption for some services, primarily in enterprise environments, where encryption keys remain under the customer’s control. This distinction is vital, as encryption experts often note: you cannot hand over what you do not have.

While personal privacy is not entirely lost, it now requires intentionality. Small choices can have significant implications. Ackerly emphasizes the importance of understanding control: “If you don’t control your encryption keys, you don’t fully control your data.” This control begins with knowing where your keys are stored. If they are kept in the cloud with your provider, your data may be accessible without your knowledge.

Once keys are outside your control, access becomes possible without your consent. Therefore, the manner in which data is encrypted is just as important as whether it is encrypted. Consumers should seek tools and services that encrypt data before it reaches the cloud, ensuring that providers cannot access it. Defaults often favor convenience, and many users do not change them. “Users should also look to avoid default settings designed for convenience,” Ackerly advises. “When convenience is the default, most individuals will unknowingly trade control for ease of use.”

When encryption is designed so that even the provider cannot access the data, the balance shifts back to the individual. “When data is encrypted in a way that even the provider can’t access, it stays private — even if a third party comes asking,” Ackerly states. “By holding your own encryption keys, you’re eliminating the possibility of the provider sharing your data.” He concludes with a straightforward lesson: “You cannot outsource responsibility for your sensitive data and assume that third parties will always act in your best interest. Encryption only fulfills its purpose when the data owner is the sole party capable of unlocking it.”

Microsoft’s decision to comply with the BitLocker warrant may have been legal, but it raises critical questions about modern encryption. Privacy relies less on mathematical algorithms and more on how systems are constructed. When companies hold the keys, the risk shifts to the users.

As individuals navigate this landscape, they must consider whether they trust tech companies to protect their encrypted data or if they believe that responsibility should rest solely with them. Understanding the implications of encryption and key management is essential for safeguarding personal privacy in an increasingly interconnected world.

According to CyberGuy, the choices users make regarding encryption and key management can significantly impact their digital privacy.

Private Lunar Lander Blue Ghost Successfully Lands on the Moon

A private lunar lander, Blue Ghost, successfully landed on the moon on Sunday, delivering equipment for NASA and marking a significant milestone for commercial space exploration.

A private lunar lander carrying essential equipment for NASA successfully touched down on the moon on Sunday. The landing was confirmed by the company’s Mission Control team, based in Texas.

Firefly Aerospace’s Blue Ghost lander made its descent from lunar orbit using autopilot technology, targeting the slopes of an ancient volcanic dome located in an impact basin on the moon’s northeastern edge. The successful landing was a significant achievement in the growing field of commercial lunar exploration.

Will Coogan, Firefly’s chief engineer for the lander, expressed excitement upon confirmation of the landing, stating, “You all stuck the landing. We’re on the moon.” This upright and stable landing positions Firefly as the first private company to successfully deliver a spacecraft to the moon without crashing or tipping over, a feat that has eluded some government space programs in the past. Historically, only five countries—Russia, the United States, China, India, and Japan—have achieved successful lunar landings.

The Blue Ghost lander, named after a rare species of firefly found in the United States, stands 6 feet 6 inches tall and spans 11 feet wide, providing enhanced stability during its lunar operations. Approximately half an hour after landing, the Blue Ghost began transmitting images from the lunar surface, with its first picture being a selfie, albeit partially obscured by the sun’s glare.

Looking ahead, two other companies are preparing to launch their lunar missions, with the next lander expected to join Blue Ghost on the moon later this week. This surge in private lunar exploration reflects a broader trend of increasing commercial interest in space, paving the way for future astronaut missions and scientific research on the moon.

According to The Associated Press, the successful landing of Blue Ghost marks a pivotal moment for Firefly Aerospace and the burgeoning commercial space industry.

Satyajayant Misra Appointed Co-Chair of Tokyo INFOCOM 2026 Committee

An Indian American professor has been appointed co-chair of the Technical Program Committee for the prestigious IEEE INFOCOM 2026 conference in Tokyo.

Satyajayant “Jay” Misra, an Indian American professor and associate dean of research at the New Mexico State University College of Engineering, has been appointed as the Technical Program Committee co-chair for the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Computer Communications 2026. This conference is recognized as one of the most prestigious events in the field of computer networking and communications.

Misra will co-chair the event alongside Professor Tian Lan from George Washington University. The IEEE INFOCOM conference serves as a premier international forum for presenting advances in computer communications, drawing leading researchers, industry experts, and academics from around the globe.

Scheduled to take place from May 18 to May 21, 2026, in Tokyo, Japan, the conference will feature a variety of activities, including keynote addresses, technical paper presentations, panels, workshops, tutorials, poster sessions, and programming aimed at students. This event continues a tradition that spans over four decades, dedicated to advancing the state of the art in networking research.

“INFOCOM continues to be one of the selective conferences for which networking and cybersecurity researchers work for a year or more to submit a high-quality paper,” Misra stated. “When I was a student, it was my dream to get a paper into INFOCOM any given year. It continues to be a high-impact venue. INFOCOM 2026 will bring researchers from all continents to spend four days in Tokyo, presenting and discussing cutting-edge research ideas.”

As co-chair of the Technical Program Committee, Misra will oversee the highly selective peer-review process, which involves more than 400 researchers from around the world. His responsibilities include building the technical program and ensuring the overall quality and impact of the research presented at the conference.

This role is considered one of the highest forms of professional service in the field, typically reserved for researchers who have made significant and sustained contributions. Misra joins a distinguished lineage of technical leaders associated with IEEE INFOCOM.

David Jáuregui, interim dean of the NMSU College of Engineering, remarked on Misra’s appointment, stating, “Dr. Misra’s appointment as Technical Program Committee co-chair of IEEE INFOCOM 2026 is a significant achievement. Serving in this role places NMSU alongside leading research institutions from around the world, underscoring the growing international visibility of our research efforts. It reflects not only Dr. Misra’s sustained scholarly leadership but also NMSU’s expanding contributions to advancing research in computer science, engineering, and emerging technologies on the global stage.”

For INFOCOM 2026, nearly 1,800 research papers were submitted from institutions worldwide, with approximately 330 papers accepted for presentation. Misra noted that this reflects the competitive nature and high standards for scholarly excellence associated with the conference.

“This year we had an increase of more than 20 percent in submitted papers, and this shows the growing interest in INFOCOM,” Misra explained. “The paper selection process is multi-level with significant oversight by seasoned researchers in the community, and it is rigorous and selective.”

The selection process lasts over five months and involves several rounds of anonymous interactions among reviewers for each paper. This culminates in a technical program committee meeting where borderline papers are adjudicated.

Misra’s role at INFOCOM 2026 highlights not only his personal achievements but also the increasing prominence of New Mexico State University in the global research community.

According to The American Bazaar, this appointment underscores the importance of collaboration and innovation in the rapidly evolving field of computer communications.

Waymo Faces Federal Investigation Following Child Struck by Vehicle

A Waymo autonomous vehicle struck a child near a Santa Monica school, leading to a federal investigation into the safety of self-driving cars in school zones.

Federal safety regulators are intensifying their scrutiny of self-driving cars following a serious incident involving Waymo, the autonomous vehicle company owned by Alphabet. The investigation focuses on a Waymo vehicle that struck a child near an elementary school in Santa Monica, California, during morning drop-off hours.

The crash occurred on January 23, raising immediate concerns about the behavior of autonomous vehicles in school zones and their ability to respond to unpredictable pedestrian movements. On January 29, the National Highway Traffic Safety Administration (NHTSA) confirmed it had opened a preliminary investigation into Waymo’s automated driving system.

According to documents released by the NHTSA, the incident took place within two blocks of the elementary school during peak drop-off times. The area was bustling with activity, including multiple children, a crossing guard, and several vehicles double-parked along the street.

Investigators reported that the child ran into the roadway from behind a double-parked SUV while heading toward the school. The Waymo vehicle struck the child, who sustained minor injuries. Notably, there was no safety operator inside the vehicle at the time of the incident.

The NHTSA’s Office of Defects Investigation is examining whether the autonomous system acted with appropriate caution given its proximity to a school zone and the presence of young pedestrians. The investigation will assess how Waymo’s automated driving system is designed to operate in and around school zones, particularly during busy pickup and drop-off times.

This includes evaluating whether the vehicle adhered to posted speed limits, how it responded to visual cues such as crossing guards and parked vehicles, and whether its post-crash response met federal safety standards. The agency is also reviewing Waymo’s actions following the incident.

Waymo stated that it voluntarily contacted regulators on the same day as the crash and expressed its commitment to cooperating fully with the investigation. In a statement, the company emphasized its dedication to improving road safety for both riders and other road users.

“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road,” the company said. “Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian.”

Waymo explained that the incident occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into the vehicle’s path. The Waymo technology detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.

“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph,” Waymo stated. “This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.”

Following the incident, the pedestrian stood up immediately and walked to the sidewalk, and 911 was called. The vehicle stopped, then pulled to the side of the road and remained there until law enforcement cleared it to leave the scene. Waymo emphasized that this event highlights the critical value of its safety systems.

Waymo vehicles are classified as Level 4 autonomy on the NHTSA’s six-level scale. At Level 4, the vehicle manages all driving tasks within specific service areas, and a human driver is not required to intervene. However, these systems do not operate everywhere and are currently limited to ride-hailing services in select cities.

The NHTSA has clarified that Level 4 vehicles are not available for consumer purchase, even though passengers may ride inside them. This latest investigation follows a previous NHTSA evaluation that began in May 2024, which examined reports of Waymo vehicles colliding with stationary objects like gates, chains, and parked cars. That investigation was closed in July 2025 after regulators reviewed the data and Waymo’s responses.

Safety advocates argue that the new incident underscores ongoing concerns regarding the operation of autonomous vehicles, particularly in sensitive environments like school zones. The investigation could influence how regulators establish expectations for autonomous driving systems near schools, playgrounds, and other areas with vulnerable pedestrians.

For parents, commuters, and riders, the outcome of this investigation may affect where and when autonomous vehicles are permitted to operate. The challenges posed by self-driving technology highlight the complexities of ensuring safety in scenarios involving human unpredictability, especially when children are involved.

Federal investigators now face a crucial question: Did the system act as cautiously as it should have in one of the most sensitive driving environments possible? The answer to this question could play a significant role in shaping the future of autonomous vehicle regulation in the United States.

For further insights, please refer to Fox News.

Athena Lunar Lander Reaches Moon; Condition Still Uncertain

Athena lunar lander successfully reached the moon, but mission controllers remain uncertain about its condition and exact landing location.

Mission controllers have confirmed that the Athena lunar lander successfully touched down on the moon earlier today. However, they are still uncertain about the spacecraft’s condition following its landing, according to the Associated Press.

The precise location of Athena’s landing remains unclear. The lander, which is operated by Intuitive Machines, was equipped with an ice drill, a drone, and two rovers. Despite the uncertainty surrounding its status, officials reported that Athena was able to establish communication with its controllers.

Tim Crain, mission director and co-founder of Intuitive Machines, was heard instructing his team to “keep working on the problem,” even as the craft sent apparent “acknowledgments” back to the team in Texas.

NASA and Intuitive Machines ended the live stream of the landing and announced plans to hold a news conference later today to provide updates on Athena’s status.

This event follows a significant milestone in lunar exploration, as Athena becomes the second craft to land on the moon this week. On Sunday, Firefly Aerospace’s Blue Ghost successfully made its landing, marking a historic achievement as the first private company to deploy a spacecraft on the moon without it crashing or tipping over. Will Coogan, chief engineer for Blue Ghost, celebrated the accomplishment, stating, “You all stuck the landing. We’re on the moon.”

Last year, Intuitive Machines faced challenges with its Odysseus lander, which landed sideways, adding pressure to the success of today’s mission. The outcomes of both Athena and Blue Ghost represent significant advancements in private lunar exploration.

As the situation develops, further details about Athena’s condition and mission objectives are anticipated during the upcoming news conference, according to the Associated Press.

Uber Appoints Indian-American Balaji Krishnamurthy as CFO Amid Expansion

Uber has appointed Balaji Krishnamurthy as its new CFO, marking a significant shift toward a driverless future and an aggressive expansion of its robotaxi services.

Uber Technologies Inc. has announced the appointment of Balaji Krishnamurthy as its next chief financial officer, effective February 16. This move signals a major strategic shift for the company, as it intensifies its focus on autonomous vehicle partnerships and the development of a driverless future.

Krishnamurthy, who has been a long-time advocate for self-driving technology within Uber, currently serves as the vice president of strategic finance and investor relations. He will succeed Prashanth Mahendra-Rajah, who is stepping down after 27 months in the role to pursue new opportunities. This leadership change was revealed alongside Uber’s fourth-quarter earnings report, emphasizing the company’s pivot from developing its own autonomous hardware to becoming a leading global platform for robotaxi services.

At 41 years old, Krishnamurthy has played a pivotal role in Uber’s “asset-light” strategy, which focuses on partnerships rather than ownership of autonomous vehicles. He has also served on the board of Waabi, an autonomous trucking startup in which Uber recently increased its investment.

“Balaji knows Uber’s business inside and out and is a brilliant, decisive strategist,” said CEO Dara Khosrowshahi. “I am thrilled for him to step up as CFO as we kick off another big year.”

The upcoming year is poised to be significant for Uber, which plans to facilitate autonomous trips in up to 15 cities worldwide by the end of 2026. This ambitious expansion relies heavily on strategic partnerships, including a notable collaboration with Alphabet’s Waymo to introduce robotaxis in Austin and Atlanta, as well as a joint effort with Lucid and Nuro to deploy custom-built autonomous electric vehicles.

During a recent call with investors, Krishnamurthy highlighted Uber’s robust cash flow and a 20% year-over-year increase in revenue, which reached $14.37 billion. He stated that this financial strength would allow the company to “invest with discipline” in the autonomous vehicle sector.

“We are entering 2026 with strong momentum,” Krishnamurthy noted. “We will invest across a multitude of opportunities, including positioning Uber to win in an AV future.”

However, the transition comes at a challenging time for Uber’s stock. Following the announcement of Krishnamurthy’s appointment, shares fell approximately 6%, as investors reacted to a first-quarter profit outlook that fell short of Wall Street expectations. This conservative guidance is partly due to the capital-intensive nature of scaling autonomous infrastructure and the costs associated with integrating new AI-driven software.

Outgoing CFO Mahendra-Rajah leaves behind a legacy of financial stabilization, having played a key role in helping Uber achieve investment-grade status and launching the company’s first-ever share buyback program. He will remain with the company as a senior advisor until July 1 to ensure a smooth transition.

As Uber shifts from being primarily a ride-hailing app to a high-tech logistics coordinator, Krishnamurthy’s appointment underscores the company’s commitment to not just preparing for a driverless future but actively investing in it.

According to The American Bazaar, this strategic shift reflects Uber’s determination to lead in the evolving landscape of autonomous transportation.

U.S. DOE Appoints Indian-Americans to Key Advisory Positions

The U.S. Department of Energy has appointed three Indian-American scientists to its newly established advisory committee, emphasizing their expertise in energy and technology.

The U.S. Department of Energy (DOE) has appointed three Indian-American scientists to its newly formed Office of Science Advisory Committee (SCAC), which is tasked with shaping the future of U.S. science and technology policy.

The SCAC will provide independent guidance on research priorities, emerging technologies, and cross-cutting scientific challenges that impact the nation’s energy agenda. This initiative comes at a critical time when the U.S. government is emphasizing innovation in fields such as fusion energy, quantum computing, and artificial intelligence.

Among the 21 members appointed to the advisory panel are Supratik Guha, Suresh Garimella, and A.N. Sreeram. Each brings a wealth of expertise in materials science, engineering, and advanced manufacturing.

Supratik Guha is a professor at the University of Chicago’s Pritzker School of Molecular Engineering and a researcher at Argonne National Laboratory. He has dedicated much of his career to the intersection of nanoscience and applied technology. Guha previously led Argonne’s Center for Nanoscale Materials and spent two decades at IBM Research, focusing on nanoscale materials and devices.

Suresh Garimella serves as the president of the University of Arizona and is a trained mechanical engineer with extensive academic and advisory experience. He has been a member of the National Science Board, a presidentially appointed body that oversees the National Science Foundation. Additionally, Garimella has held advisory roles with Sandia National Laboratories and the U.S. State Department, focusing on scientific collaboration.

A.N. Sreeram is the senior vice president and chief technology officer at Dow, where he holds more than 20 patents and has a long history in industrial research. His work emphasizes accelerating the transformation of scientific breakthroughs into commercial products. Sreeram has also served on the White House’s President’s Council of Advisors on Science and Technology.

Another notable member of Indian origin is Pushmeet Kohli, a British Indian computer scientist and vice president of science and strategic initiatives at Google DeepMind. His work primarily focuses on machine learning and AI-driven discovery.

Officials have indicated that SCAC’s broad mandate includes advising on federal research priorities, facilitating collaboration across national laboratories and universities, and helping the Department of Energy anticipate and adapt to new technological trends. The committee is expected to play a strategic role as the U.S. navigates competition in critical fields such as quantum science and climate-related technologies.

DOE Under Secretary for Science Darío Gil, who oversees the Office of Science, highlighted the importance of diverse expertise in achieving the department’s mission. “By bringing together leading minds from diverse institutions, we’re forging a collaborative framework that will accelerate the translation of fundamental research into tangible benefits for the American people,” Gil stated.

The appointments reflect the growing influence of Indian-Americans in U.S. science and the DOE’s commitment to harnessing global talent to advance national research priorities. The advisory committee is set to serve through January 2028, with its findings expected to inform DOE decisions.

SCAC will be chaired by Persis Drell, a professor of materials science and engineering and physics at Stanford University, who is also the provost emerita of Stanford and director emerita of SLAC National Accelerator Laboratory. The committee will adopt the core functions of the Office of Science’s six former discretionary advisory committees.

According to The American Bazaar, the establishment of SCAC marks a significant step in integrating diverse expertise into U.S. energy policy and research initiatives.

149 Million Passwords Exposed in Major Credential Leak

Over 149 million stolen credentials, including 48 million Gmail accounts, were exposed online, raising significant concerns about password security and the risks associated with credential reuse.

A massive database containing 149 million stolen logins and passwords has been discovered publicly exposed online, marking a troubling start to the year for password security. Among the compromised data are credentials linked to an estimated 48 million Gmail accounts, as well as millions from other popular services.

Cybersecurity researcher Jeremiah Fowler, who uncovered the database, confirmed that it was neither password-protected nor encrypted. This means that anyone who stumbled upon it could access the sensitive information without any barriers.

The database comprises 149,404,754 unique usernames and passwords, totaling approximately 96 gigabytes of raw credential data. Fowler noted that the exposed files contained email addresses, usernames, passwords, and direct login URLs for various platforms. Some records even indicated the presence of info-stealing malware, which can silently capture credentials from infected devices.

Importantly, this incident does not represent a new breach of Google, Meta, or other companies. Instead, the database appears to be a compilation of credentials stolen over time from previous breaches and malware infections. While this distinction is critical, the risk to users remains substantial.

Fowler estimates that email accounts dominate the dataset, which is particularly concerning because access to an email account often facilitates access to other accounts. A compromised email inbox can be exploited to reset passwords, access private documents, read years of messages, and impersonate the account holder. The prevalence of Gmail credentials in this database raises alarms that extend beyond any single service.

This exposed database was not a relic of the past; the number of records increased while Fowler was investigating it, suggesting that the malware responsible for the data collection was still active. Additionally, there was no ownership information associated with the database. After multiple attempts to alert the hosting provider, it took nearly a month for the database to be taken offline. During that time, anyone with internet access could have searched through the data, heightening the stakes for everyday users.

It is crucial to note that hackers did not breach Google or Meta systems directly. Instead, malware infected individual devices and harvested login details as users typed them or stored them in browsers. This type of malware is often disseminated through fake software updates, malicious email attachments, compromised browser extensions, or deceptive advertisements. Changing passwords alone will not mitigate the risk if the malware remains on the device.

To protect yourself, it is essential to take proactive steps, even if everything appears fine at the moment. Credential leaks like this often resurface weeks or months later. One of the most significant risks highlighted by this database is password reuse. If attackers gain access to one working login, they frequently test it across multiple sites automatically.

Start by changing reused passwords, prioritizing email, financial, and cloud accounts. Each account should have a unique password. Consider using a password manager to securely store and generate complex passwords, which can significantly reduce the risk of password reuse.
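To illustrate what a password generator does under the hood, here is a minimal sketch using Python’s standard `secrets` module; the function name and character policy are our own illustrative choices, not taken from any particular password manager:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password containing at least
    one lowercase letter, one uppercase letter, and one digit."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until all required character classes are present.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

# Each call yields an independent random password.
print(generate_password())
```

Each account would get its own call; a real password manager additionally stores the result in an encrypted vault so you never have to remember it.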

Next, check if your email has been exposed in past breaches. Many password managers include a built-in breach scanner that can verify whether your email address or passwords have appeared in known leaks. If you find a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
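Breach scanners that check passwords typically rely on the k-anonymity scheme popularized by Have I Been Pwned: only the first five hex characters of the password’s SHA-1 hash are sent to the service, and the match is performed locally. A sketch of the client-side half, assuming the publicly documented `api.pwnedpasswords.com/range/` endpoint (the helper name is ours):

```python
import hashlib

def pwned_query_parts(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash of a password into the 5-character prefix
    that would be sent to https://api.pwnedpasswords.com/range/<prefix>
    and the suffix compared locally against the response, so the
    service never sees the full hash or the password itself."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_query_parts("password")
print(prefix)  # → 5BAA6, the only data that leaves your machine
```

The service responds with every known hash suffix under that prefix along with a breach count; if your suffix appears in the list, that password has been seen in a leak and should be retired.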

Passkeys are another option to consider, as they replace traditional passwords with device-based authentication tied to biometrics or hardware. This means there is nothing for malware to steal. Major platforms, including Gmail, already support passkeys, and their adoption is on the rise. Enabling passkeys now can significantly reduce your attack surface.

Implementing two-factor authentication (2FA) adds an extra layer of security, even if a password is compromised. Whenever possible, use authenticator apps or hardware keys instead of SMS for 2FA, as this step alone can thwart most account takeover attempts linked to stolen credentials.

Changing passwords will not be effective if malware remains on your device. It is vital to install robust antivirus software and conduct a full system scan. Remove anything flagged as suspicious before updating passwords or security settings. Keeping your operating system and browsers fully updated is also crucial.

To safeguard against malicious links that could install malware and potentially access your private information, having strong antivirus software on all your devices is essential. This protection can also alert you to phishing emails and ransomware scams, helping to keep your personal information and digital assets secure.

Most major services let you review recent login locations, devices, and active sessions. Regularly check for unfamiliar activity, particularly logins from new countries or devices. If you notice anything suspicious, sign out of all sessions if the option is available and reset your credentials immediately.

Stolen credentials are often combined with data scraped from data broker sites, which can include personal information such as addresses, phone numbers, relatives, and work history. Utilizing a data removal service can help reduce the amount of personal information criminals can pair with leaked logins. Less exposed data makes phishing and impersonation attacks more challenging to execute.

While no service can guarantee complete removal of your data from the internet, a data removal service is a wise choice. Though these services can be costly, they actively monitor and systematically erase your personal information from numerous websites, providing peace of mind and effectively reducing your risk of being targeted.

Old accounts can be easy targets, as users often forget to secure them. Closing unused services and deleting accounts tied to outdated app subscriptions or trials can reduce the number of potential entry points for attackers.

This exposed database serves as a stark reminder that credential theft has become an industrial-scale operation. Criminals act quickly and often prioritize speed over security. However, simple steps can still be effective. Unique passwords, strong authentication, malware protection, and basic cyber hygiene can significantly enhance your security. Remain vigilant and proactive in safeguarding your digital presence.

For further information on protecting your online accounts, visit CyberGuy.com.

Artificial Intelligence Drives Development of New Energy Sources

Artificial Intelligence is playing a pivotal role in addressing rising electricity costs and enhancing energy sources, as U.S. consumers face unprecedented power bills amid increasing demand.

Artificial Intelligence (AI) and the proliferation of data centers are significant contributors to the rising electricity costs across the United States. As of December 2025, American consumers are paying 42% more for electricity compared to a decade ago. Exelon CEO Calvin Butler emphasized, “When you have increased demand and inadequate supply, costs are going to go up. And that’s what we’re experiencing right now.”

In 2024, U.S. data centers accounted for over 4% of the total electricity consumption in the country, according to the International Energy Agency. This consumption level is comparable to the annual electricity usage of the entire nation of Pakistan. Projections indicate that U.S. data center electricity consumption could grow by 133% by the end of the decade, reaching levels equivalent to the entire electricity consumption of France.

Butler noted that Exelon, headquartered in Chicago and owner of ComEd—one of the largest utilities in the nation—has seen a significant increase in data center load. “ComEd’s peak load is roughly 23 gigawatts. We have had data center load come onto the system, but by 2030, we’ll be at 19 gigawatts,” he explained. The utility has received a surge of connection requests from data centers, with potential projects totaling over 30 gigawatts expected to come online between now and 2045.

Butler remarked on the unprecedented growth in the sector, stating, “With the data center advent and the technology coming, we’ve been forced to serve that load, which is our responsibility. But what we also have to do is build new generation supply, which is not keeping up with the load that is coming on. And that’s the crunch that we’re in right now.”

In response to the growing demand, Commonwealth Edison is seeking regulatory approval for a $15.3 billion grid update over the next four years. While the U.S. has increased its grid capacity by more than 15% in the past decade, many utility companies and energy producers argue that this expansion is insufficient.

Bob Mumgaard, CEO of Commonwealth Fusion Systems, expressed concern about the current electricity constraints. “You want to make power plants that can make a lot of power in a small package that you can put anywhere, that you could run at any time, and fusion fits that bill,” he said. The company is working to introduce a new form of nuclear energy—fusion—which promises the reliability of traditional nuclear energy without producing long-lived radioactive waste.

“In fusion, there’s no chain reaction. The result is helium, which is safe and inert, and you don’t use it to make anything related to weapons,” Mumgaard added.

As the U.S. grapples with its power crunch, the role of AI in energy innovation is becoming increasingly vital. Commonwealth Fusion Systems is leveraging AI to accelerate the development of fusion energy. “Building and designing these complex machines and manipulating this complex state of matter, plasma, are all things that we’re still learning and figuring out how to do,” Mumgaard explained. “And that’s an area where we’ve been able to accelerate using AI.”

AI is also poised to enhance under-utilized energy sources, particularly geothermal energy. Despite its potential, geothermal energy has remained a small part of the electric grid due to high drilling costs and uncertainty about optimal infrastructure placement. Joel Edwards, co-founder of Zanskar, highlighted the potential of AI in improving geothermal exploration. “If you could drill the perfect geothermal well every single time, like you pick the right spot, you design the right well, you drill the 5,000, 8,000 feet, you hit 400°F temperatures, that’s incredibly productive,” he stated.

Zanskar is focused on refining the geothermal search process through AI-driven mapping techniques to identify untapped resources. “If we could just get more precise in where we go to find the things and then how we drill into the things, geothermal absolutely has the cost curve to come down,” Edwards noted. “And that’s sort of what we’re running towards, with AI giving us the boost, giving us an edge to do that.”

Both geothermal and nuclear fusion energy sources offer the advantage of producing power consistently, regardless of weather conditions. This capability could have alleviated some of the strain on the grid during recent winter storms. Butler cautioned about the urgency of addressing these energy challenges, likening the situation to driving a car with a persistent check engine light. “We have to pay attention to what’s going on, and this winter storm—Winter Storm Fern—is indicative of what’s coming,” he warned.

The integration of AI into energy production and management is not only a response to rising costs but also a crucial step toward a more sustainable and reliable energy future. As the demand for electricity continues to grow, the role of innovative technologies like AI will be essential in meeting the challenges ahead, according to Fox News.

IIT Alum Sanjiban Choudhury Receives NSF Early Career Development Award

Sanjiban Choudhury, an Indian American robotics researcher, has received the National Science Foundation Faculty Early Career Development Award for his innovative work in robotics.

Sanjiban Choudhury, an Indian American robotics researcher, has been awarded the National Science Foundation (NSF) Faculty Early Career Development Award for his groundbreaking efforts in developing robots that learn new skills similarly to humans. Choudhury, who serves as an assistant professor of computer science at Cornell University’s Ann S. Bowers College of Computing and Information Science, will utilize the $400,000 award to further his research initiatives.

The NSF award is designed to support early-career faculty members who demonstrate the potential to become academic role models in both research and education. The award also aims to foster advancements within their respective departments or organizations. Each funded project must incorporate an educational component, emphasizing the importance of teaching alongside research.

Choudhury’s research focuses on creating robots that can assist in various environments, including homes, hospitals, and farms. While many existing robots are limited to pre-programmed tasks, they often struggle to adapt to new situations or learn from human interactions. Choudhury’s innovative project seeks to overcome these limitations by developing robot helpers capable of learning new skills through observation, practice, and feedback.

The implications of Choudhury’s work could significantly enhance the functionality and adaptability of robots, enabling them to tackle more complex real-world challenges. His research not only aims to improve robotic assistance in everyday tasks but also seeks to deepen our understanding of how robots can learn and adapt to their environments.

In addition to his research, Choudhury’s project includes educational programs designed to engage K-12 students through interactive robotics activities. By providing accessible online resources, he aims to increase participation in STEM fields and promote interest in robotics research among young learners.

Choudhury’s academic background is impressive. He completed his postdoctoral research at the University of Washington and earned his Master’s and PhD degrees from Carnegie Mellon University, after receiving his undergraduate and a prior Master’s degree in electrical engineering from the Indian Institute of Technology, Kharagpur.

Choudhury also leads the Portal group, which focuses on developing everyday robots that are user-friendly and practical for tasks ranging from cooking to cleaning. His commitment to making robotics accessible to a broader audience underscores his dedication to advancing the field.

As robotics continues to evolve, Choudhury’s contributions may pave the way for a future where robots can seamlessly integrate into daily life, providing valuable assistance across various sectors.

According to a press release from Cornell University, Choudhury’s work exemplifies the potential of robotics to enhance human capabilities and improve quality of life.

AI Wearable Technology Aids Stroke Survivors in Regaining Speech

Researchers at the University of Cambridge have developed Revoice, a wearable device that significantly improves communication for stroke survivors suffering from dysarthria.

Losing the ability to speak clearly after a stroke can be a devastating experience. For many survivors, the words remain in their minds, but their bodies struggle to cooperate. This results in speech that is slow, unclear, or fragmented. Known as dysarthria, this condition affects nearly half of all stroke survivors, making everyday communication exhausting and frustrating.

In response to this challenge, scientists at the University of Cambridge have developed a groundbreaking wearable device called Revoice. Designed specifically for individuals with post-stroke speech impairment, Revoice aims to help users communicate naturally without the need for surgery or brain implants.

Dysarthria is a physical speech disorder that can weaken the muscles in the face, mouth, and vocal cords following a stroke. As a result, speech may sound slurred, slow, or incomplete. Many stroke survivors can only articulate a few words at a time, despite knowing exactly what they wish to convey. Professor Luigi Occhipinti notes that this disconnect can lead to profound frustration for those affected. While stroke survivors often work with speech therapists using repetitive drills to improve their communication skills, these exercises can take months or longer to yield results. This prolonged recovery period can leave patients struggling during daily interactions with family, caregivers, and healthcare providers.

Revoice offers a novel approach to addressing these communication barriers. Instead of requiring users to type, track their eye movements, or rely on invasive implants, the device detects subtle physical signals from the throat and neck. Resembling a soft, flexible choker made from breathable, washable fabric, Revoice contains ultra-sensitive textile strain sensors and a small wireless circuit board. When a user silently mouths words, the sensors pick up tiny vibrations in the throat muscles. Simultaneously, the device measures pulse signals in the neck to gauge the user’s emotional state.

The device processes these signals using two artificial intelligence (AI) agents, enabling Revoice to convert a few mouthed words into fluent speech in real-time. Previous silent speech systems faced significant limitations, often tested only on healthy volunteers and requiring users to pause for several seconds between words, which disrupted the flow of conversation. Revoice overcomes these delays by employing an AI-driven throat sensor system paired with a lightweight language model. This efficient model consumes minimal power and delivers near-instantaneous responses, powered by a 1,800 mWh battery that researchers anticipate will last a full day on a single charge.
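As a rough back-of-envelope check on the reported figures (an assumption, not a published spec sheet), a 1,800 mWh battery lasting a full day implies an average draw of about 75 mW, which is consistent with the article’s claim of a low-power model:

```python
# Back-of-envelope power budget implied by the reported battery figures:
# a 1,800 mWh battery lasting "a full day on a single charge" constrains
# the device's average power draw.
battery_mwh = 1800        # reported capacity, milliwatt-hours
target_hours = 24         # one day of continuous wear

avg_power_mw = battery_mwh / target_hours
print(f"Average draw budget: {avg_power_mw:.0f} mW")  # prints "Average draw budget: 75 mW"
```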

After refining the system with healthy participants, researchers conducted tests with five stroke patients suffering from dysarthria. The results were striking. In one instance, a patient mouthed the phrase “We go hospital,” and Revoice expanded it into a complete sentence that conveyed urgency and frustration, based on the emotional signals and context. Participants reported a 55% increase in communication satisfaction, stating that the device helped them communicate as fluently as they did prior to their stroke.

Researchers believe that Revoice could also benefit individuals with Parkinson’s disease and motor neuron disease. Its comfortable, washable design makes it suitable for daily wear, allowing it to integrate seamlessly into users’ routines rather than being confined to clinical settings. However, before widespread adoption can occur, larger clinical trials are necessary. The research team plans to initiate broader studies with native English-speaking patients and aims to expand the system to support multiple languages and a wider range of emotional expressions. The findings of this research were published in the journal Nature Communications.

For those who have experienced a stroke or have loved ones who have, this research indicates a significant shift in recovery tools. Revoice suggests that effective speech assistance does not need to be invasive. A wearable solution could support communication during the challenging months of rehabilitation, a time when confidence and independence often wane. Additionally, it may alleviate stress for caregivers who struggle to understand incomplete or unclear speech. Clear communication can enhance medical care, emotional well-being, and daily decision-making.

Communication is closely tied to dignity and independence. For stroke survivors, losing the ability to speak can be one of the most difficult aspects of recovery. Revoice exemplifies how artificial intelligence and wearable technology can collaborate to restore something fundamentally human. While it is still in the early stages, this device represents a meaningful step toward making recovery feel less isolating and more hopeful.

If a simple wearable could help restore natural speech, should it become a standard part of stroke rehabilitation? The potential impact of Revoice on the lives of stroke survivors and their families is profound, and further exploration of this technology may pave the way for a new era in speech recovery.

According to Fox News, the advancements made with Revoice could redefine the rehabilitation process for countless individuals affected by speech impairments.

Researchers Identify Source of Black Hole’s 3,000-Light-Year Jet Stream

A new study connects the M87 black hole to its powerful cosmic jet, revealing how it launches particles at nearly the speed of light.

A recent study has established a link between the renowned M87 black hole—the first black hole ever imaged—and its formidable cosmic jet. This research sheds light on how black holes can launch particles at speeds approaching that of light.

Using significantly enhanced coverage from the global Event Horizon Telescope, scientists have traced a cosmic jet that extends 3,000 light-years from the M87 black hole to its probable source. The findings, published in the journal Astronomy & Astrophysics this week, could provide crucial insights into the origins and mechanisms behind the vast cosmic jets emitted by black holes.

Located in the Messier 87 galaxy approximately 55 million light-years from Earth, M87 is a supermassive black hole that is 6.5 billion times the mass of the sun. The first image of this black hole was unveiled to the public in 2019, following data collection by the Event Horizon Telescope in 2017.

Dr. Padi Boyd of NASA highlighted the significance of M87, stating in a video about the discovery that not only is the black hole supermassive, but it is also active. “Just a few percent are active at any given time,” she explained. “Are they turning on and then turning off? That’s an idea… We know there are very high magnetic fields that launch a jet. This image provides observational evidence that what we’ve been seeing for a while is actually being launched by a jet connected to that supermassive black hole at the center of M87.”

M87 is known for consuming surrounding gas and dust while simultaneously ejecting powerful jets of charged particles from its poles, which form the observed jet, as reported by Scientific American and Space.com.

Saurabh, the team leader at the Max Planck Institute for Radio Astronomy, remarked on the implications of the study, stating, “This study represents an early step toward connecting theoretical ideas about jet launching with direct observations.” He further noted, “Identifying where the jet may originate and how it connects to the black hole’s shadow adds a key piece to the puzzle and points toward a better understanding of how the central engine operates.”

The Event Horizon Telescope is a collaborative network of eight radio observatories that work together to detect radio waves emitted by astronomical objects, such as galaxies and black holes. This network effectively creates an Earth-sized telescope, allowing for unprecedented observations of these distant phenomena. The term “Event Horizon” refers to the boundary of a black hole beyond which light cannot escape, as defined by the National Science Foundation.

The findings were derived from data collected by the Event Horizon Telescope in 2021. However, the authors of the study cautioned that while the results are robust under the assumptions and tests performed, definitive confirmation and more precise constraints will necessitate future observations with the Event Horizon Telescope. These future observations would require higher sensitivity, improved intermediate-baseline coverage through additional stations, and an expanded frequency range.

As researchers continue to explore the mysteries of black holes, this study marks a significant advancement in understanding the dynamics of cosmic jets and their connection to supermassive black holes like M87, paving the way for future discoveries in the field of astrophysics.

According to Space.com, the implications of this research extend beyond mere observation, potentially reshaping our understanding of black hole behavior and the fundamental processes that govern these enigmatic cosmic entities.

Indian-American Raj Badhwar Appointed CIO at SPA

Indian American Raj Badhwar has been appointed Chief Information Officer at Systems Planning & Analysis, where he will enhance technology capabilities for national security missions.

Indian American IT leader Raj Badhwar has joined Systems Planning & Analysis (SPA), a prominent provider of data-driven analytical insights for national security programs, as Chief Information Officer (CIO). The company is based in Alexandria, Virginia.

Badhwar is now a member of SPA’s Executive Leadership Team and reports directly to Chief Executive Officer Rich Sawchak, according to a recent company announcement.

In his role as CIO, Badhwar will oversee SPA’s enterprise information technology (IT) organization. His responsibilities encompass digital strategy, architecture, engineering, operations, data management, and business intelligence. He aims to deliver secure, resilient, and scalable technology solutions while enhancing cybersecurity platforms in collaboration with SPA’s business and mission teams.

“Raj brings deep expertise in cybersecurity, cloud, and enterprise IT that will be critical as SPA continues to grow and support increasingly complex national security missions,” Sawchak stated. “His leadership will help ensure our technology remains secure, modern, and aligned with both our customers’ needs and our long-term strategy.”

Badhwar’s immediate priorities include bolstering technology capabilities that support SPA’s national security clients, improving efficiency and scalability within the IT organization, and ensuring that technology investments are in line with mission delivery, business growth, and acquisition activities.

“My work at SPA will center on ensuring technology directly supports mission outcomes for our national security customers,” Badhwar explained. “That means strengthening security and resilience, simplifying operations as we scale, and advancing our cloud, data, and cybersecurity capabilities in a disciplined and trusted way.”

With over 30 years of experience leading secure technology and cybersecurity organizations across various sectors, including engineering, defense, financial services, and cloud platforms, Badhwar is well-equipped to help SPA establish a secure, cloud-enabled, and data-driven technology foundation for future national security missions.

Badhwar holds a master’s degree in information systems technology from George Washington University and a bachelor’s degree in electrical and electronics engineering from Karnatak University in Dharwad, India.

The information regarding Badhwar’s appointment was reported by The American Bazaar.

Major U.S. Shipping Platform Exposed Customer Data to Hackers

Hackers are increasingly targeting global shipping technology, exposing vulnerabilities that could lead to significant cargo theft and supply chain disruptions.

In recent months, cybersecurity experts have raised alarms about the growing threat of hackers targeting the technology that underpins global shipping. This trend has shifted the focus of cargo theft from traditional methods, such as stolen trucks and forged paperwork, to sophisticated cyberattacks that manipulate logistics systems managing goods worth millions of dollars.

One notable incident involves Bluspark Global, a New York-based shipping technology provider. Its Bluvoyix platform is utilized by numerous companies to manage and track freight worldwide. Although Bluspark is not a household name, its software plays a crucial role in the operations of major retailers, grocery chains, and manufacturers.

For several months, Bluspark’s systems reportedly contained significant security vulnerabilities that left its platform exposed to potential attackers on the internet. The company acknowledged that five vulnerabilities were eventually addressed, including the use of plaintext passwords and the ability to remotely access and interact with the Bluvoyix platform. These flaws could have allowed hackers to access decades of shipment records and sensitive customer data.

While Bluspark claims that these issues have been resolved, the timeline leading up to the fixes raises serious concerns about the duration of the platform’s vulnerability and the challenges in notifying the company about the issues.

Security researcher Eaton Zveare discovered the vulnerabilities in October while examining a Bluspark customer’s website. What began as a routine review of a contact form quickly escalated into a deeper investigation. By analyzing the website’s source code, Zveare found that messages sent through the form were processed via Bluspark’s servers using an application programming interface (API).

As Zveare delved further, he uncovered that the API’s documentation was publicly accessible and included a feature that allowed anyone to test commands. Despite claims that authentication was necessary, the API returned sensitive data without requiring any login credentials. Zveare was able to extract extensive user account information, including employee and customer usernames and passwords stored in plaintext.

Even more alarming, the API permitted the creation of new administrator-level accounts without adequate security checks. This meant that an attacker could potentially gain full access to the Bluvoyix platform and view shipment data dating back to 2007. Security tokens intended to restrict access could also be bypassed entirely.
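The flaws described above are failures of two basic controls: password hashing and authenticated endpoints. As a hedged illustration of the first (a minimal sketch, not Bluspark’s actual stack, with all names illustrative), storing only a salted scrypt digest means that even a full database leak does not reveal the passwords themselves:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted scrypt digest; only this record is stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest  # store the salt alongside the digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the digest with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("wrong guess", record))                   # False
```

A leaked `record` here yields only a salt and a memory-hard digest, unlike the plaintext values the researcher reportedly extracted.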

Perhaps the most troubling aspect of this situation is not just the vulnerabilities themselves, but the difficulty Zveare faced in getting them addressed. After discovering the flaws, he spent weeks attempting to contact Bluspark through emails, voicemails, and LinkedIn messages, all to no avail.

With no clear process for disclosing vulnerabilities, Zveare eventually sought assistance from Maritime Hacking Village, an organization that helps researchers notify companies in the shipping and maritime sectors. When that effort failed, he turned to the media as a last resort. It was only after engaging the press that Bluspark responded, albeit through its legal counsel.

Following the media coverage, Bluspark confirmed that it had patched the vulnerabilities and announced plans to establish a formal vulnerability disclosure program. However, the company has not disclosed whether it found evidence that attackers exploited these bugs to manipulate shipments, stating only that there was no indication of customer impact. Additionally, Bluspark declined to provide details about its security practices or any third-party audits.

The incident underscores the reality that hackers can infiltrate shipping and logistics platforms without users ever realizing their data has been compromised. As a precaution, experts recommend several steps to mitigate risks associated with such attacks.

After a supply chain breach, criminals often send phishing emails or texts impersonating shipping companies, retailers, or delivery services. If you receive a message urging you to click a link or “confirm” shipment details, take a moment to verify its authenticity by visiting the retailer’s website directly.

Moreover, if attackers gain access to customer databases, they may attempt to use the same login credentials across various platforms. Utilizing a password manager can help ensure that each account has a unique password, preventing a single breach from compromising multiple accounts.

It is also advisable to check whether your email has been exposed in previous breaches. Many password managers include built-in breach scanners that can alert you if your email address or passwords have appeared in known leaks. If you find a match, promptly change any reused passwords and secure those accounts with new, unique credentials.
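One widely used breach scanner behind such features is the public Pwned Passwords range API, which uses a k-anonymity scheme: only the first five hex characters of the password’s SHA-1 digest ever leave your machine, and the match is checked locally. A minimal Python sketch (the function names are illustrative, not from any vendor’s SDK):

```python
import hashlib
import urllib.request

def sha1_split(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent to the
    API and the 35-char suffix that is only ever compared locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Query the Pwned Passwords range API; the service never sees the
    password, only the short hash prefix shared by many other hashes."""
    prefix, suffix = sha1_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # times this password appears in known leaks
    return 0
```

For example, `sha1_split("password")` yields the prefix `5BAA6`; the API returns every leaked suffix under that prefix, and the caller checks for its own suffix offline.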

Given that criminals often combine data from different breaches with information gathered from data broker sites, personal data removal services can help minimize the amount of publicly available information about you. While no service can guarantee complete removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind.

Additionally, strong antivirus software can block malicious links, fake shipping pages, and malware-laden attachments that often follow high-profile breaches. Keeping real-time protection enabled is crucial for safeguarding personal information and digital assets.

Implementing two-factor authentication (2FA) can significantly enhance account security, making it much harder for attackers to take over accounts even if they have obtained your password. It is essential to prioritize 2FA for email, shopping accounts, cloud storage, and any service that stores payment or delivery information.
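Most authenticator-app 2FA codes follow the TOTP standard (RFC 6238): an HMAC-SHA1 over a 30-second time counter, dynamically truncated to six or eight digits. A self-contained Python sketch, checked against the RFC’s published test vectors, shows why a stolen password alone is not enough, since the code changes every 30 seconds:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int(time.time() if timestamp is None else timestamp) // step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B vector: key "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", timestamp=59, digits=8))
```

The server and the authenticator app share only the secret; an attacker with the password but not the secret cannot produce the current code.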

In the aftermath of such incidents, it is also wise to monitor online shopping accounts for unfamiliar orders, address changes, or saved payment methods that you do not recognize. Early detection can prevent fraud from escalating.

Identity theft protection services can alert you to suspicious credit activity and assist in recovery if attackers access your personal details. These services monitor personal information, such as Social Security numbers and email addresses, and can notify you if they are being sold on the dark web or used to open new accounts.

In light of this incident, companies that rely on shipping and logistics platforms should take this as a reminder to review vendor access controls. Limiting administrative permissions, regularly rotating API keys, and ensuring vendors have a clear vulnerability disclosure process are critical steps in enhancing supply chain security.

As shipping platforms operate at the intersection of physical goods and digital systems, they remain attractive targets for cybercriminals. When basic protections like authentication and password encryption are absent, the consequences can extend beyond digital breaches, leading to stolen cargo and significant disruptions in the supply chain.

The incident involving Bluspark Global highlights the urgent need for companies to adopt robust security measures and establish transparent processes for reporting vulnerabilities. As the threat landscape continues to evolve, it is imperative for organizations to remain vigilant in protecting their systems and customer data.

For further insights on cybersecurity and data protection, please refer to CyberGuy.com.

Spectacular Blue Spiral Light in Night Sky Likely from SpaceX Rocket

A stunning blue spiral light, likely from a SpaceX Falcon 9 rocket, illuminated the night sky over Europe on Monday, captivating viewers and sparking social media excitement.

A mesmerizing blue light spiraled through the night sky over Europe on Monday, captivating onlookers and igniting discussions across social media platforms. Experts suggest that this striking phenomenon was caused by the SpaceX Falcon 9 rocket booster re-entering the Earth’s atmosphere.

Time-lapse footage captured from Croatia around 4 p.m. EST (9 p.m. local time) showcased the glowing spiral, which many observers likened to a cosmic whirlpool or a spiral galaxy. The full video, recorded at normal speed, lasts approximately six minutes, providing a stunning visual of the event.

The U.K.’s Met Office reported receiving numerous accounts of an “illuminated swirl in the sky,” confirming that it was likely related to the SpaceX rocket launch from Cape Canaveral, Florida. The Falcon 9 rocket lifted off at around 1:50 p.m. EST as part of the classified NROL-69 mission for the National Reconnaissance Office (NRO), the U.S. government’s intelligence and surveillance agency.

“This is likely to be caused by the SpaceX Falcon 9 rocket, launched earlier today,” the Met Office stated on X (formerly Twitter). “The rocket’s frozen exhaust plume appears to be spinning in the atmosphere and reflecting sunlight, which causes it to appear as a spiral in the sky.”

This glowing spectacle is a phenomenon often referred to as a “SpaceX spiral,” according to Space.com. Such spirals typically occur when the upper stage of a Falcon 9 rocket separates from its first-stage booster. As the upper stage continues its ascent into space, the lower stage descends back to Earth, releasing any remaining fuel. The fuel then freezes almost instantly at high altitudes, and sunlight reflects off the frozen particles, creating the striking visual effect.

Fox News Digital reached out to SpaceX for further comment but did not receive an immediate response. The timing of Monday’s celestial display was notable, as it followed closely on the heels of a successful SpaceX mission that saw a team working with NASA return two stranded astronauts from space.

The captivating blue spiral not only delighted viewers but also underscored the intricate and often dramatic nature of space exploration and rocket launches. As SpaceX continues to push the boundaries of aerospace technology, such visual phenomena are likely to become more common, further enchanting audiences around the globe.

According to Space.com, the occurrence of these spirals is a fascinating byproduct of modern rocket launches, blending science and spectacle in the night sky.

Philanthropists Chandrika and Ranjan Tandon Fund $11 Million AI School at IIM Ahmedabad

The Indian Institute of Management Ahmedabad has partnered with philanthropists Chandrika and Ranjan Tandon to establish a new school focused on artificial intelligence, supported by an $11 million endowment.

NEW DELHI – The Indian Institute of Management Ahmedabad (IIMA) has entered into a Memorandum of Understanding with philanthropist and alumna Chandrika Krishnamurthy Tandon and her husband, Ranjan Tandon, to create the Krishnamurthy Tandon School of Artificial Intelligence. This initiative is backed by a substantial endowment of ₹100 crore, equivalent to approximately $11 million.

The agreement was formalized in New Delhi, with Union Education Minister Dharmendra Pradhan in attendance. India’s Ambassador to the United States, Vinay Kwatra, participated in the event virtually.

The newly proposed school will function as a specialized center within IIMA, focusing on artificial intelligence at the intersection of technology, management, and public policy. According to a statement, the school will emphasize real-world applications and societal impact.

During the event, Minister Pradhan highlighted that this agreement is in line with preparations for the upcoming India–AI Impact Summit 2026. He noted that the initiative reflects ongoing efforts under Prime Minister Narendra Modi to enhance India’s global standing in the field of artificial intelligence. Pradhan emphasized that India’s advancements in AI will rely heavily on robust institutions and skilled human capital, in addition to technological capabilities.

The minister also praised the philanthropic efforts of the Tandon family, stating that alumni-led initiatives play a crucial role in strengthening academic institutions and expanding national capacity in emerging technologies.

The Krishnamurthy Tandon School of Artificial Intelligence aims to serve as a hub for collaboration among faculty, industry leaders, policymakers, and global partners. Its mission will include the development of application-led and case-based AI research, with a strong focus on translating research findings into practical solutions for business, governance, and social sectors.

Among those present at the signing ceremony were Higher Education Secretary Dr. Vineet Joshi, IIMA Director Prof. Bharat Bhasker, Joint Secretary (Higher Education) Purnendu Banerjee, and other senior representatives from the ministry.

This significant investment in education and technology underscores the growing importance of artificial intelligence in India and reflects a commitment to fostering innovation and leadership in this critical field, according to India West.

Under Armour Data Breach Affects Millions of Users Worldwide

Under Armour is investigating a significant data breach affecting approximately 72 million customers, following the online posting of sensitive records by hackers.

Sportswear and fitness brand Under Armour is currently probing claims of a substantial data breach after customer records were discovered on a hacker forum. The breach came to light when millions of users received alerts indicating that their personal information may have been compromised.

While Under Armour maintains that its investigation is ongoing, cybersecurity experts analyzing the leaked data suggest it contains personal details that could be linked to customer purchases. The breach notification service Have I Been Pwned reported that the dataset includes email addresses associated with around 72 million individuals, prompting the organization to directly notify affected users.

The scale of this exposure has raised significant concerns regarding the potential misuse of consumer data long after a breach has occurred. The stolen data is reportedly tied to a ransomware attack that took place in November 2025, for which the Everest ransomware group claimed responsibility. This group attempted to extort Under Armour by threatening to leak internal files.

In January 2026, customer data from this incident surfaced on a popular hacking forum. Shortly thereafter, Have I Been Pwned obtained a copy of the data and began alerting affected users via email. Reports indicate that the seller claimed the stolen files originated from the November breach and included millions of customer records.

The leaked dataset is believed to encompass a wide range of personal information. While there has been no confirmation regarding the exposure of payment card details, the data remains highly valuable to cybercriminals. Compromised information may include names, email addresses, birth dates, and purchase histories, which can be exploited to create convincing scams.

Researchers have also identified email addresses belonging to Under Armour employees within the leaked data, increasing the risk of targeted phishing and business email compromise scams. An Under Armour spokesperson stated, “We are aware of claims that an unauthorized third party obtained certain data. Our investigation of this issue, with the assistance of external cybersecurity experts, is ongoing. Importantly, at this time, there’s no evidence to suggest this issue affected UA.com or systems used to process payments or store customer passwords. Any implication that sensitive personal information of tens of millions of customers has been compromised is unfounded. The security of our systems and data is a top priority for UA, and we take this issue very seriously.”

Even in the absence of passwords or payment details, this breach poses serious risks. Names, email addresses, birth dates, and purchase histories can be used to craft highly convincing phishing attempts. Cybercriminals often reference actual purchases or account details to gain the trust of their targets. Consequently, phishing emails related to this breach may appear legitimate and urgent.

Over time, exposed data can be combined with information from other breaches to create detailed identity profiles that are increasingly difficult to protect against. To determine if your email has been affected, visit the Have I Been Pwned website, which serves as the official source for this newly added dataset. Enter your email address to check if your information appears in the leak.

If you received a breach alert or suspect your information may be included, taking immediate action can help mitigate future risks. If you have reused the same password across multiple sites, it is advisable to change those passwords promptly. Even if Under Armour asserts that passwords were not compromised, exposed email addresses can be used in follow-up attacks.

Utilizing a password manager can simplify this process by generating strong, unique passwords for each account and securely storing them. This way, a single breach cannot jeopardize multiple accounts. Additionally, check if your email has been exposed in previous breaches. Many password managers now include a built-in breach scanner that verifies whether your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords immediately and secure those accounts with new, unique credentials.
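What a password manager does when it generates a credential can be sketched in a few lines. This toy example uses Python's `secrets` module, which draws from the operating system's cryptographically secure random source; the `generate_password` helper is hypothetical, shown only to illustrate the idea of strong, unique, per-site passwords.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, one digit, and one symbol."""
    if length < 4:
        raise ValueError("length must be at least 4")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Because each password is generated independently, reuse across sites never happens, which is exactly the property that contains the damage from any single breach.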

Cybercriminals often act swiftly following a breach. As a result, emails that seem to originate from Under Armour or other fitness brands may appear in your inbox. Exercise caution with messages claiming there is an issue with your account or a recent purchase. Avoid clicking links or opening attachments in unexpected emails; instead, visit the company’s official website directly if you need to verify your account.

Employing robust antivirus software can also help block malicious links and attachments before they cause harm. Strong antivirus protection on all your devices can stop links that would install malware and access your private information, and it can alert you to phishing emails and ransomware scams, safeguarding your personal information and digital assets.

Implementing two-factor authentication (2FA) adds an additional layer of security. Even if someone obtains your password, they would still require a second step to log in. Start by enabling 2FA for your email accounts, then extend it to shopping, fitness, and financial accounts. This simple measure can prevent many account takeover attempts linked to breached data.
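The codes produced by authenticator apps follow the open TOTP standard (RFC 6238), which layers a time window on top of HOTP (RFC 4226). The sketch below, written with only the Python standard library, shows the mechanism under the assumption of a raw byte key; real authenticator apps use a base32-encoded shared secret scanned from a QR code.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # low nibble picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current 30-second window,
    so the code changes automatically as time advances."""
    if for_time is None:
        for_time = int(time.time())
    return hotp(key, for_time // step, digits)
```

Because the server and the app derive the same code from a shared secret plus the clock, a stolen password alone is not enough to log in, which is why enabling 2FA blunts so many credential-stuffing attacks.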

After a breach, attackers frequently test stolen email addresses across various sites, which can trigger password reset emails that you did not request. Pay close attention to these alerts. If you receive one, secure the account immediately by changing the password and reviewing recent activity.

The Under Armour data breach serves as a reminder that even major global brands can become targets. While payment systems appear unaffected, the exposure of personal data still presents long-term risks for millions of customers. Data breaches often unfold over time, and what begins as leaked records can later fuel scams, identity theft, and targeted attacks. Remaining vigilant now can help reduce the likelihood of more significant issues in the future.

For further information, visit Cyberguy.com, where you can find expert-reviewed password managers, antivirus solutions, and data removal services to help protect your personal information.

According to CyberGuy, the Under Armour data breach highlights the ongoing risks associated with data security in the digital age.

Elon Musk Considers Company Merger Ahead of SpaceX IPO

Elon Musk is considering a merger of his companies, including SpaceX and xAI, as the rocket manufacturer prepares for a significant IPO this year.

Elon Musk, the CEO of Tesla, is reportedly exploring the possibility of merging his various companies, including SpaceX and xAI. This move comes in the wake of his decision to utilize Tesla funds to support xAI, raising questions among investors about the potential synergies between Musk’s ventures in space exploration, autonomous driving, and artificial intelligence.

According to a report by Bloomberg, SpaceX is in discussions regarding a merger with xAI, Musk’s artificial intelligence company. Gene Munster, a Tesla shareholder and managing partner at xAI investor Deepwater Asset Management, expressed optimism about the merger’s likelihood, stating, “I think it’s highly likely that (xAI) ends up with one of the two parties.”

As SpaceX prepares for a major public offering scheduled for this year, the potential merger with xAI could consolidate Musk’s diverse portfolio, which includes rockets, Starlink satellites, the X social media platform, and the Grok chatbot. This consolidation could streamline operations and enhance strategic coherence across Musk’s enterprises, according to sources familiar with the discussions and regulatory filings.

Dennis Dick, chief market strategist at Stock Trader Network, commented on Musk’s expansive business interests, noting, “Musk has too many separate companies. A major risk thesis for Tesla is that Musk is spreading himself out too much. As a Tesla shareholder, I applaud further consolidation.”

If the merger between SpaceX and xAI proceeds, it is expected that xAI shares would be exchanged for SpaceX shares. This consolidation could represent a significant shift in how Musk manages his extensive business empire, potentially allowing for greater integration of technologies developed across his various companies.

By centralizing operations, Musk could accelerate innovation and streamline decision-making processes, reducing redundancies in research, development, and operations. For investors, a unified structure may clarify growth prospects and simplify valuations, addressing concerns about Musk’s divided attention among multiple high-profile ventures.

From a competitive standpoint, merging these assets could strengthen SpaceX’s position in emerging technology markets, particularly in artificial intelligence and autonomous systems. By aligning expertise, talent, and technological capabilities under one organizational umbrella, Musk may be better equipped to tackle ambitious projects that span multiple industries, including aerospace, defense, and AI-driven commercial applications.

Incorporating xAI into SpaceX’s operations could also enhance the company’s prospects for securing contracts with the Pentagon, which has been actively seeking to increase AI adoption within military networks. Caleb Henry, an analyst at Quilty Analytics, highlighted this potential advantage, noting that the merger could position SpaceX favorably in the defense sector.

However, merging different corporate cultures, compliance requirements, and financial structures could pose challenges. If not managed carefully, these complexities could create friction or slow down execution, impacting both short-term performance and long-term strategic outcomes. How Musk navigates these challenges will likely play a crucial role in the success of the merger.

Ultimately, the potential consolidation of Musk’s companies reflects his ambition to create a cohesive ecosystem of interrelated technologies. This strategy could position SpaceX and his other ventures for a new era of innovation and market influence, although the outcome remains uncertain and contingent upon regulatory approvals, investor support, and effective execution.

The broader implications of such a merger could reshape investor perceptions of Musk’s ventures, potentially attracting capital from those interested in a unified tech ecosystem. Market reactions may vary based on the effectiveness of the integration process, and analysts will likely debate whether the potential synergies outweigh the risks associated with overconcentration. Additionally, this move could prompt competitors to reevaluate their strategies, considering partnerships or mergers to remain competitive in overlapping sectors.

As the situation develops, stakeholders will be closely monitoring Musk’s next steps and the potential impact on the tech landscape.

According to Bloomberg, the discussions surrounding the merger are ongoing, and the final outcome will depend on various factors, including regulatory approvals and investor sentiment.

Humanoid Robot Designs Building, Making Architectural History

Ai-Da Robot has made history as the first humanoid robot to design a building, presenting a modular housing concept for future lunar and Martian bases at the Utzon Center in Denmark.

At the Utzon Center in Denmark, Ai-Da Robot, recognized as the world’s first ultra-realistic robot artist, has achieved a groundbreaking milestone by becoming the first humanoid robot to design a building. The project, titled Ai-Da: Space Pod, introduces a modular housing concept intended for future bases on the Moon and Mars.

This innovative endeavor marks a significant shift in Ai-Da’s capabilities, moving from creating art to conceptualizing physical spaces for both humans and robots. Previously, Ai-Da garnered attention for her work in drawing, painting, and performance art, which sparked global discussions about the role of robots in creative fields.

The exhibition “I’m not a robot,” currently on display at the Utzon Center, runs through October and delves into the creative potential of machines as robots increasingly demonstrate the ability to think and create independently. Visitors can engage with Ai-Da’s drawings, paintings, and architectural designs, and glimpse her creative process through sketches and a video interview.

Ai-Da is not merely a digital avatar or animation; she possesses camera eyes, advanced AI algorithms, and a robotic arm that enables her to draw and paint in real time. Developed in Oxford and constructed in Cornwall in 2019, Ai-Da’s versatility spans multiple disciplines, including painting, sculpture, poetry, performance, and now architectural design.

Aidan Meller, the creator of Ai-Da and Director of Ai-Da Robot, explains the significance of the Space Pod concept. “Ai-Da presents a concept for a shared residential area called Ai-Da: Space Pod, foreshadowing a future where AI becomes an integral part of architecture,” he states. “With intelligent systems, a building will be able to sense and respond to its occupants, adjusting light, temperature, and digital interfaces according to needs and moods.”

The Space Pod design is intentionally modular, allowing each unit to connect with others through corridors, fostering a shared residential environment. Ai-Da’s artistic vision includes a home and studio suitable for both humans and robots. According to her team, these designs could evolve into fully realized architectural models through 3D renderings and construction, potentially adapting to planned Moon or Mars base camps.

While the concept primarily targets future extraterrestrial bases, it is also feasible to create a prototype on Earth. This aspect is particularly relevant as space agencies prepare for extended missions beyond our planet. Meller emphasizes the timeliness of the project, noting, “With our first crewed Moon landing in 50 years scheduled for 2027, Ai-Da: Space Pod is a simple unit connected to other Pods via corridors.” He adds, “Ai-Da is a humanoid designing homes, which raises questions about the future of architecture as powerful AI systems gain greater agency.”

The exhibition aims to provoke thought and discomfort regarding the rapid pace of technological advancement. Meller points to developments in emotional recognition through biometric data, CRISPR gene editing, and brain-computer interfaces, each carrying both promise and ethical risks. He references dystopian themes from literature, such as Aldous Huxley’s “Brave New World,” and cautions about the potential misuse of powerful technologies.

Line Nørskov Davenport, Director of Exhibitions at the Utzon Center, describes Ai-Da as a “confrontational” figure, stating, “The very fact that she exists is confrontational. Ai-Da is an AI shaker, a conversation starter.” This exhibition transcends the realms of robotics and space exploration, highlighting the swift transition of AI from a creative tool to a decision-maker in architecture and housing.

As AI begins to influence the design of living spaces, critical questions about control, ethics, and accountability arise. If a robot can conceptualize homes for the Moon, it raises concerns about how such technology might shape building functionality on Earth.

Ai-Da’s work challenges the notion of what is possible for humanoid robots and their role in society. Her presence in a major cultural institution ignites discussions about creativity, technology, and responsibility. As the boundaries between human and machine continue to blur, the implications of AI’s involvement in architecture and design become increasingly significant.

The question remains: if AI can design the homes of our future, how much creative control should humans be willing to relinquish? This inquiry invites ongoing dialogue about the intersection of technology and human creativity.

According to CyberGuy, Ai-Da’s Space Pod serves as a catalyst for critical reflection on the evolving relationship between humans and artificial intelligence.

Wolf Species Extinct for 12,500 Years Resurrected, Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species that last roamed the Earth over 12,500 years ago, using advanced genetic technologies.

A U.S. company, Colossal Biosciences, has announced a groundbreaking achievement: the revival of the dire wolf, a species that has been extinct for more than 12,500 years. The dire wolf, made famous by the HBO series “Game of Thrones,” is said to have been brought back to life through innovative genome-editing and cloning techniques.

According to Colossal Biosciences, this marks the world’s first successful instance of what they term a “de-extincted animal.” However, some experts have raised concerns, suggesting that the company may have merely genetically modified existing wolves rather than truly resurrecting the extinct apex predator.

Historically, dire wolves roamed the American midcontinent during the Ice Age. The oldest confirmed fossil of a dire wolf, dating back approximately 250,000 years, was discovered in the Black Hills of South Dakota. In “Game of Thrones,” these wolves are portrayed as larger and more intelligent than their modern counterparts, exhibiting fierce loyalty to the Stark family, a central noble house in the series.

Colossal’s project has produced three litters of dire wolves, including two adolescent males named Romulus and Remus, and a female puppy called Khaleesi. The scientists utilized blood cells from a living gray wolf and employed CRISPR technology—short for “clustered regularly interspaced short palindromic repeats”—to make genetic modifications at 20 different sites. According to Beth Shapiro, Colossal’s chief scientist, these modifications were designed to replicate traits believed to have helped dire wolves survive in cold climates during the Ice Age, such as larger body sizes and longer, fuller, light-colored fur.

Of the 20 genome edits made, 15 correspond to genes identified in actual dire wolves. The ancient DNA used in the project was extracted from two fossils: a tooth from Sheridan Pit, Ohio, approximately 13,000 years old, and an inner ear bone from American Falls, Idaho, dating back around 72,000 years.

The genetic material was transferred into an egg cell from a domestic dog, and the embryos were subsequently implanted into surrogate domestic dogs. After a gestation period of 62 days, the genetically engineered pups were born.

Ben Lamm, CEO of Colossal Biosciences, described this achievement as a significant milestone, emphasizing that it represents the first of many examples showcasing the effectiveness of the company’s comprehensive de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal Biosciences has previously announced similar initiatives aimed at genetically altering cells from living species to create animals resembling other extinct species, such as woolly mammoths and dodos. In addition to the dire wolves, the company recently reported the birth of two litters of cloned red wolves, which are critically endangered. This development is seen as evidence of the potential for conservation through de-extinction technology.

During a recent announcement, Lamm mentioned that the team had met with officials from the Interior Department in late March regarding their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists have expressed skepticism about the feasibility of restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, voiced concerns about the claims made by Colossal Biosciences. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw remarked. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences asserts that the wolves are currently thriving in a secure 2,000-acre ecological preserve in Texas, which is certified by the American Humane Society and registered with the USDA. Looking ahead, the company plans to restore the species in secure and expansive ecological preserves, potentially on indigenous land.

This ambitious project raises important questions about the future of conservation and the ethical implications of de-extinction efforts. As the debate continues, the work of Colossal Biosciences may pave the way for new approaches to preserving biodiversity.

According to Fox News, the implications of this project extend beyond mere scientific curiosity, potentially influencing conservation strategies for endangered species in the years to come.

Samsung Galaxy S26 Ultra Leaks Reveal February 2026 Launch Details

Leaks suggest that Samsung will unveil its Galaxy S26 series, including the Galaxy S26 Ultra, during a Galaxy Unpacked event on February 25, 2026, with a likely on-sale date in March.

Samsung enthusiasts are gearing up for one of the most significant smartphone launches of 2026, as recent leaks and industry hints indicate a Galaxy Unpacked event scheduled for February 25, 2026. During this event, Samsung is expected to unveil its next-generation Galaxy S26 lineup, which includes the Galaxy S26, Galaxy S26+, and Galaxy S26 Ultra.

Traditionally, Samsung kicks off its flagship smartphone cycle with the Galaxy S series, typically announcing new models in January or February. However, this year’s unveiling appears to be more than a month later than usual, a shift that has generated considerable excitement among fans eager to see what innovations the South Korean tech giant will introduce.

Insider tipster Evan Blass recently shared a leaked invitation on X, confirming the February 25 launch date for the Galaxy Unpacked event. The teaser image also hints at the simultaneous launch of Samsung’s next-generation Galaxy Buds 4 and Buds 4 Pro, making this event a significant occasion for multiple new product introductions. This confirmed date aligns with various recent leaks and supports ongoing rumors regarding the phone’s launch timeline.

The Galaxy S26 series is anticipated to follow a familiar three-model structure: standard, Plus, and Ultra. This return to a traditional format comes after the Galaxy S25 Edge was reportedly dropped due to lackluster sales.

In terms of display and design, all models are expected to feature high-quality AMOLED displays with 120Hz refresh rates, improved brightness, and enhanced viewing angles. Some variants may also incorporate new privacy display technology to protect on-screen content from prying eyes.

Performance-wise, the base Galaxy S26 and S26+ may utilize Samsung’s in-house Exynos 2600 chipset, while the S26 Ultra is likely to be powered by Qualcomm’s Snapdragon 8 Elite Gen 5, a robust flagship processor.

Camera capabilities are also set to receive a significant upgrade, with early reports indicating that the Ultra model will feature a 200-megapixel main sensor. This will be complemented by advanced cropping or zoom solutions and wider aperture lenses designed to enhance low-light photography.

Additionally, leaked information suggests that the entire Galaxy S26 range may support upgraded wireless charging and MagSafe-style accessories through Qi2 compatibility.

While Samsung has yet to officially confirm the launch dates, leaks from various sources, including tipsters like Ice Universe, suggest the following timeline:

Galaxy Unpacked Event: February 25, 2026

Pre-Orders Start: Around February 26

Pre-Sale Period: Early March

Official On-Sale Date: Around March 11, 2026

These dates may vary slightly by region, but the overall trend indicates a late February introduction followed by a March market debut.

As for pricing, the expected costs for the Galaxy S26 series in India are as follows:

The Galaxy S26 is likely to start at around ₹84,999, with a base storage option of 256GB, as the 128GB variant may be discontinued. Higher storage options, such as 512GB, are expected to be priced above the entry-level model.

The Galaxy S26 Plus is anticipated to have a starting price of approximately ₹1,04,999, with the base 256GB variant remaining similar to last year’s model. The 512GB variant is likely to be priced higher than previous Plus models.

For the Galaxy S26 Ultra, the expected starting price is around ₹1,34,999. The 256GB and 512GB versions may be slightly cheaper than their S25 Ultra counterparts, while the 1TB variant is expected to maintain a price similar to last year’s Ultra model.

The delay in the launch of the Galaxy S26 series is noteworthy for fans and potential buyers. Historically, Samsung has unveiled its Galaxy S-series smartphones in late January or early February, as seen with the Galaxy S25 launch in January 2025. This year’s later debut may be attributed to strategic changes in the lineup and product planning.

This delay has heightened anticipation, with fans speculating that Samsung might be fine-tuning hardware upgrades, storage options, and design features. As the February 25 event approaches, more detailed leaks regarding specifications and pricing are expected to surface.

For tech enthusiasts and smartphone buyers, the late February launch offers a compelling reason to postpone upgrades until Samsung’s next flagship arrives. With anticipated improvements across display, chipset, camera, battery, and AI features, the Galaxy S26 series is poised to compete vigorously in the premium smartphone segment.

The introduction of new Galaxy Buds at the same event further enhances the value of the February 25 Unpacked, making it one of the most eagerly awaited tech events of early 2026.

These insights into the upcoming Galaxy S26 series are based on leaks and industry speculation, according to The Sunday Guardian.

Startup Bazaar to Host Events in UAE on January 31 and February 2

The American Bazaar’s Startup Bazaar series will debut in the UAE with events in Abu Dhabi and Dubai, focusing on AI and emerging technologies.

The American Bazaar is set to launch its flagship Startup Bazaar series in the United Arab Emirates, featuring back-to-back events on January 31, 2026, in Abu Dhabi and February 2, 2026, in Dubai. These events aim to unite startup founders, investors, and leaders in the tech ecosystem to explore and showcase innovations in artificial intelligence and other emerging technologies.

Positioned at the intersection of technology, investment, and policy, the Startup Bazaar events promise a vibrant mix of ideas, discussions, and networking opportunities that will help shape the future of AI-driven entrepreneurship.

The Abu Dhabi event will take place on January 31, while the Dubai event is scheduled for February 2. Both events are organized in partnership with Talrop, an India-based technology and innovation company dedicated to fostering startups, developing digital products, and nurturing tech talent across the Gulf Cooperation Council (GCC) region.

These gatherings are expected to attract U.S.-based investors alongside their counterparts from the GCC and India, as well as senior executives and high-growth founders. This diverse mix will facilitate a unique cross-border exchange of insights and perspectives.

As the UAE continues to establish itself as a global hub for advanced technologies, the Startup Bazaar will highlight innovations in AI, deep tech, and other frontier technologies, particularly in the energy, healthtech, and pharmaceutical sectors. These discussions are anticipated to contribute to economic transformation and create tangible impacts in the region.

“The UAE is emerging as one of the most exciting and execution-focused AI startup ecosystems globally,” said Sanjay Puri, a member of the U.S. investor delegation attending the events. “This delegation presents a valuable opportunity to engage with founders, universities, family offices, and industry leaders like G42, exploring how talent, capital, and policy are converging at scale. I am particularly interested in how the region is translating research and ambition into globally competitive AI companies, and I see significant potential for long-term cross-border partnerships and investment.”

Designed to be more than a traditional conference, Startup Bazaar offers an immersive experience for startup founders, technologists, investors, policymakers, corporate innovation leaders, researchers, and professionals. Attendees will have the chance to engage directly with the U.S. delegation, which includes angel investors and AI experts.

A highlight of both events will be the Startup Showcase, where selected startups will pitch their ideas to potential investors. For founders seeking visibility, feedback, and funding opportunities, this showcase serves as a direct gateway to international markets.

As Startup Bazaar makes its debut in Abu Dhabi and Dubai, it not only fosters conversations about innovation but also brings together the people, capital, and ambition necessary to drive future advancements.

For those interested in attending, registration is now open for both the Abu Dhabi and Dubai editions of Startup Bazaar.

According to The American Bazaar, the series promises to be a significant event in the region’s tech landscape.

Dr. Satheesh Kathula Appointed Chair of Board of Directors, Indo-American Press Club

The Indo-American Press Club (IAPC), the largest and most influential organization representing journalists and media professionals of Indian origin across North America, has announced the appointment of Dr. Satheesh Kathula as Chair of its Board of Directors for 2026. A distinguished oncologist, community leader, and immediate past president of the American Association of Physicians of Indian Origin (AAPI), Dr. Kathula brings more than two decades of leadership and public service to this prominent role.

Dr. Kathula has served as a practicing oncologist for nearly 25 years, earning widespread respect for his compassionate care and contributions to the advancement of cancer treatment.

His association with IAPC spans many years, during which he received the organization’s prestigious Leadership Award in recognition of his service and advocacy.

Accepting the new role, Dr. Kathula outlined a bold and forward-looking vision for the organization. “As the Chair of the Indo-American Press Club, I will champion ethical, evidence-based journalism, strengthen Indo–U.S. narratives, and elevate health and science reporting,” he said. Emphasizing modernization and broader engagement, he added, “My focus is on building bridges across cultures, modernizing our digital presence, and expanding our influence beyond ethnic media. With unity, integrity, and responsible innovation at the core, I aim to create a lasting legacy that empowers journalists, informs communities, and positions the Club as a trusted voice of impact.”

Reflecting on the challenges facing media professionals today, Dr. Kathula noted, “These are unprecedented times, especially for journalists and the media, when the very freedom of expression is at risk. At IAPC, we envisage our vision through collective efforts and advocacy activities through our nearly one thousand members across the U.S. and Canada, by being a link between the media fraternity and the world at large.”

Ginsmon Zachariah, Founding Chair of the IAPC Board of Directors, highlighted the broader mission of the organization. “Our homeland India is known to have a vibrant, active, and free media, which plays a vital role in the functioning of the world’s largest democracy,” he said. “As members of the media in our adopted land, we recognize our responsibility to be a source of effective communication. We have a role to play in shaping a just and equitable world where everyone enjoys freedom and liberty.”

Providing historical context, Ajay Ghosh, Founding President of IAPC, reflected on the organization’s origins. “We as individuals and corporations representing print, visual, electronic, and online media realized that we had a greater role to play,” he said. “For decades, many of us stood alone in a vast media landscape, our voices often drowned out. IAPC was formed to fill this vacuum—a common platform to raise our collective voice, pool our talents, and respond cohesively to the challenges of the modern world.”

A graduate of Siddhartha Medical College in Vijayawada, Andhra Pradesh, Dr. Kathula currently serves as a clinical professor of medicine at Wright State University’s Boonshoft School of Medicine in Dayton, Ohio. He completed the Global Healthcare Leaders Program at Harvard University, holds a certificate in Artificial Intelligence in Healthcare from Stanford University, and is a Diplomate of the American Board of Lifestyle Medicine.

He has authored several medical papers and published a book, “Immigrant Doctors: Chasing the Big American Dream,” highlighting the contributions of immigrant doctors, their struggles, and their triumphs. The book became an Amazon best seller, and he has begun work on a second book, on cancer awareness for the general public.

Dr. Kathula’s professional achievements extend far beyond medicine. He has led bone marrow donor drives to address the severe shortage of South Asian donors and was named “Man of the Year – 2018” by the Leukemia and Lymphoma Society for raising funds for research into newer treatments and cures for blood cancers.

His commitment to community service is equally noteworthy. His philanthropic work in India includes establishing the Pathfinder Institute of Pharmacy and Educational Research (PIPER) in Warangal, Telangana, which has already graduated more than 1,000 students. He has also supported medical camps and donated essential infrastructure, including a defibrillator, a water purification system, a CPR center, and a library, to his native community.

Dr. Kathula has served AAPI in numerous leadership roles, including Regional Director, Trustee, Treasurer, Secretary, Vice President, and President-Elect before assuming the presidency in July 2024.

Dr. Kathula has received numerous honors, including the U.S. Presidential Lifetime Achievement Award. In December 2024, he was honored with the Inspirational Award by the Raising Awareness of Youth with Autism (RAYWA) Foundation at a gala held at New York’s iconic Pierre Hotel. In May 2025, IAPC itself bestowed upon him its Lifetime Achievement Award.

Founded in 2013, the Indo-American Press Club continues to serve as a unifying platform for journalists of Indian origin, fostering collaboration, professionalism, and a commitment to the public good. More information is available at www.indoamericanpressclub.com.

Tiny Autonomous Robots Achieve Independent Swimming Capability

Researchers have developed the smallest fully programmable autonomous robots capable of swimming, potentially transforming medicine and healthcare.

For decades, the concept of microscopic robots has largely existed in the realm of science fiction. Films like “Fantastic Voyage” fueled our imaginations, suggesting that tiny machines could one day navigate the human body to repair ailments from within. However, this vision remained elusive, primarily due to the constraints imposed by physics.

Now, a significant breakthrough from researchers at the University of Pennsylvania and the University of Michigan has altered this narrative. The teams have successfully created the smallest fully programmable autonomous robots to date, and these innovative machines can swim.

Measuring approximately 200 by 300 by 50 micrometers, these robots are smaller than a grain of salt and comparable in size to a single-celled organism. Unlike traditional robots that rely on legs or propellers for movement, these microscopic machines utilize electrokinetics. Each robot generates a small electrical field that attracts charged ions in the surrounding fluid, effectively creating a current that propels the robot forward without any moving parts. This design not only enhances durability but also simplifies handling with delicate laboratory tools.

Each robot is powered by tiny solar cells that produce just 75 nanowatts of energy—over 100,000 times less than what a smartwatch consumes. To achieve this level of efficiency, engineers had to redesign various components, including ultra-low voltage circuits and a custom instruction set that condenses complex behaviors into a few hundred bits of memory. Despite these limitations, each robot is capable of sensing its environment, storing data, and making decisions about its next movements.

Due to their size, the robots cannot accommodate antennas. Instead, the research team drew inspiration from nature, enabling each robot to perform a specific wiggle pattern to convey information, such as temperature. This motion follows a precise encoding scheme that researchers can interpret by observing the robots under a microscope. This method of communication is reminiscent of how bees convey messages through movement. Programming the robots is equally innovative; researchers use light signals that the robots interpret as instructions, with a built-in passcode to prevent interference from random light sources.
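The researchers’ actual encoding scheme is not detailed here, but the idea of signaling through motion can be sketched as a simple scheme: quantize a sensor reading into a few bits, then emit each bit as a visually distinguishable wiggle. In the sketch below, the symbol names (“short” vs. “long” wiggles), the 4-bit width, and the temperature range are all illustrative assumptions, not the robots’ real protocol.

```python
# Illustrative sketch only: encode a temperature reading as a sequence of
# wiggle symbols ('S' = short wiggle for bit 0, 'L' = long wiggle for bit 1).
# The real robots' encoding scheme is not reproduced here.

def encode_temperature(temp_c: float, lo: float = 20.0, hi: float = 45.0,
                       bits: int = 4) -> str:
    """Quantize temp_c into `bits` bits over [lo, hi] and emit wiggle symbols."""
    levels = 2 ** bits - 1
    frac = min(max((temp_c - lo) / (hi - lo), 0.0), 1.0)
    value = round(frac * levels)
    return ''.join('L' if (value >> i) & 1 else 'S' for i in reversed(range(bits)))

def decode_temperature(wiggles: str, lo: float = 20.0, hi: float = 45.0) -> float:
    """Invert the encoding from an observed wiggle sequence (microscope side)."""
    bits = len(wiggles)
    value = int(''.join('1' if w == 'L' else '0' for w in wiggles), 2)
    return lo + (value / (2 ** bits - 1)) * (hi - lo)

pattern = encode_temperature(37.0)
print(pattern, round(decode_temperature(pattern), 1))  # → LSLS 36.7
```

A 4-bit code recovers the reading only to within one quantization step, which is the trade-off any such motion-based channel makes between message length and precision.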

In current experiments, the robots exhibit thermotaxis, meaning they can sense heat and swim autonomously toward warmer areas. This capability suggests promising future applications, such as tracking inflammation, identifying disease markers, or delivering drugs with pinpoint accuracy. While light can already power these robots near the skin, researchers are also investigating ultrasound as a potential energy source for deeper environments.

Thanks to their construction using standard semiconductor manufacturing techniques, these robots can be produced en masse. More than 100 robots can fit on a single chip, and manufacturing yields have already surpassed 50%. In large-scale production, the estimated cost could drop below one cent per robot, making the concept of disposable robot swarms a tangible reality.

This technology is not merely about creating flashy gadgets; it represents a significant advancement in scalability. Robots of this size could one day monitor health at the cellular level, construct materials from the ground up, or explore environments that are too fragile for larger machines. Although practical medical applications are still years away, this breakthrough indicates that true autonomy at the microscale is finally within reach.

For nearly half a century, the promise of microscopic robots has felt like a dream that science could never fully realize. However, this research, published in Science Robotics, marks a pivotal shift. By embracing the unique physics of the microscale rather than resisting it, engineers have unlocked an entirely new class of machines. This is just the beginning, but it represents a significant leap forward. As sensing, movement, and decision-making capabilities are integrated into these nearly invisible robots, the future of robotics is poised to look remarkably different.

As we consider the potential of tiny robots swimming through our bodies, the question arises: would we trust them to monitor our health or deliver treatment? This inquiry invites further exploration into the future of healthcare technology.

According to Science Robotics, the implications of this research could extend far beyond initial expectations, paving the way for revolutionary advancements in medical science.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals.

Google is embarking on an innovative project that harnesses artificial intelligence (AI) to explore the complexities of dolphin communication, with the ultimate aspiration of enabling humans to converse with these remarkable creatures.

Dolphins are widely recognized as some of the most intelligent animals on the planet, celebrated for their emotional depth and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP)—a Florida-based non-profit dedicated to studying dolphin sounds for over four decades—Google is developing a new AI model named DolphinGemma.

The Wild Dolphin Project has spent years correlating various dolphin sounds with specific behavioral contexts. For example, signature whistles are commonly used by mothers to locate their calves, while burst pulse “squawks” are often associated with aggressive encounters among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by WDP, Google has constructed DolphinGemma, which builds upon its existing lightweight AI model known as Gemma. This new model is designed to analyze a vast library of dolphin recordings, identifying patterns, structures, and potential meanings behind the vocalizations of these marine mammals.

Over time, DolphinGemma aims to categorize dolphin sounds into distinct groups—similar to words, sentences, or expressions in human language. According to a blog post from Google, “By identifying recurring sound patterns, clusters, and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort.”

The project envisions that these identified patterns, combined with synthetic sounds created by researchers to represent objects that dolphins enjoy interacting with, may eventually lead to the establishment of a shared vocabulary for interactive communication between humans and dolphins.

DolphinGemma employs audio recording technology from Google’s Pixel phones to capture high-quality sound recordings of dolphin vocalizations. This technology is adept at isolating dolphin clicks and whistles from background noise, such as waves, boat engines, or underwater static. Clean audio is essential for AI models like DolphinGemma, as noisy data can hinder the AI’s ability to learn effectively.

Google plans to release DolphinGemma as an open model this summer, making it accessible for researchers worldwide to utilize and adapt for their own studies. Although the model has been primarily trained on Atlantic spotted dolphins, researchers believe it could also be fine-tuned to study other species, such as bottlenose or spinner dolphins.

In a statement, Google expressed its hope that by providing tools like DolphinGemma, researchers globally will be empowered to analyze their own acoustic datasets, accelerate the search for patterns, and collectively enhance our understanding of these intelligent marine mammals.

As this groundbreaking project unfolds, the potential for deeper human-dolphin communication may soon become a reality, opening new avenues for interaction with one of the ocean’s most fascinating inhabitants, according to Fox News.

AI Robot Provides Emotional Support for Pets

Aura, an AI-powered pet robot by Tuya Smart, aims to enhance emotional care for pets by tracking their behavior and providing real-time interaction.

Tuya Smart has unveiled Aura, its first AI-powered companion robot designed specifically for household pets, including cats and dogs. This innovative device utilizes artificial intelligence to recognize pet behaviors, movements, and vocal cues, addressing a growing need for emotional engagement in pet care.

The concept behind Aura is straightforward: pets require more than just food and surveillance; they need attention, interaction, and reassurance. Aura actively monitors pets at home, observing behavioral changes and responding in real time, which helps owners gain insights into their pets’ emotional states. Many pets experience stress or anxiety when left alone for extended periods, with subtle signs often emerging first. For instance, a dog may stop playing, while a cat might hide or groom excessively. Aura steps in during these quiet moments, providing engagement and companionship rather than leaving pets in an empty room.

While traditional smart feeders and pet cameras cover basic needs, emotional care presents a different challenge. Pets are inherently social creatures, and their moods can shift rapidly with changes in routine. Aura tracks behavior and listens for variations in sound patterns, allowing it to discern whether a pet is feeling excited, anxious, lonely, or relaxed. This information is relayed to the owner’s smartphone in real time, enabling early detection of potential issues.

Aura functions more like a companion than a stationary device. It employs multiple systems throughout the day to keep pets engaged. Rather than waiting for a button press, Aura proactively seeks opportunities for interaction, transforming long, quiet hours into moments of play and stimulation. Additionally, it captures everyday highlights—such as playful bursts, calm naps, and amusing interactions—using AI pet recognition and intelligent tracking. These moments can be automatically compiled into short videos, allowing owners to stay connected with their pets even when they are away. This feature also makes it easier to document and share special moments with family or on social media.

Movement is a key aspect of Aura’s functionality. Equipped with V-SLAM navigation, binocular vision, and AIVI object recognition, Aura can navigate freely around the home while avoiding obstacles. When its battery runs low, it autonomously returns to its charging dock, ensuring it remains ready for action without requiring constant attention from owners.

Aura is designed to integrate with Tuya’s broader ecosystem, which offers services beyond basic pet care. These services include smart pet boarding, health and medical care, behavior training, grooming, customization, and community tools. Rather than focusing on a single task, Aura serves as a central hub for comprehensive pet care that can evolve over time.

While Aura currently targets pet care, the underlying technology has broader implications. The principles of emotional awareness, proactive assistance, and ecosystem integration could also be applied to elder care, home monitoring, and family connectivity. By starting with pets, Tuya establishes a clear emotional use case while laying the groundwork for future advancements in home robotics.

Despite the excitement surrounding Aura, Tuya has yet to announce a release date or pricing details. The company introduced the robot earlier this month at CES 2026, but specifics regarding availability and cost remain unclear. These details are expected to emerge as the company approaches a wider consumer launch.

Aura represents a significant shift in how smart home technology interacts with pets, moving beyond simple monitoring to embrace interaction and emotional awareness. If Aura fulfills its promise, it could provide pet owners with greater peace of mind when leaving their pets home alone, while maintaining a connection throughout the day.

As technology advances to interpret and respond to pet emotions in real time, it raises questions about the role of such devices in our daily routines. Would you trust an AI companion to become part of your pet care regimen, or would that feel like an overstep? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the future of pet care is evolving with technology that prioritizes emotional well-being.

Google Fast Pair Vulnerability Allows Hackers to Take Control of Headphones

Google has responded to serious security flaws in its Fast Pair technology, which could allow hackers to hijack Bluetooth headphones and other devices, by issuing patches and updating certification requirements.

Google’s Fast Pair technology, designed to simplify Bluetooth connections, is facing significant security vulnerabilities that could allow unauthorized access to headphones, earbuds, and speakers. Researchers from KU Leuven have identified these flaws, which they have dubbed “WhisperPair.” This method enables nearby attackers to connect to devices without the owner’s knowledge, raising serious privacy concerns.

One of the most alarming aspects of this vulnerability is that it affects not only Android users but also iPhone users. Fast Pair operates by broadcasting a device’s identity to nearby phones and computers, facilitating quick connections. However, the researchers discovered that many devices fail to enforce a critical rule: they continue to accept new pairings even when already connected. This oversight creates an opportunity for malicious actors.

Within Bluetooth range, an attacker can silently pair with a device in approximately 10 to 15 seconds. Once connected, they can disrupt calls, inject audio, or even activate the device’s microphone. Notably, this attack can be executed using standard devices such as smartphones, laptops, or low-cost hardware like a Raspberry Pi, allowing the attacker to effectively assume control of the device.

The researchers tested 17 Fast Pair-compatible devices from well-known brands, including Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech, and Google. Alarmingly, most of these products had passed Google’s certification testing, raising concerns about the efficacy of the security checks in place.

Some affected models pose an even greater privacy risk. Certain Google and Sony devices integrate with Find Hub, a feature that uses nearby devices to estimate location. If an attacker connects to a headset that has never been linked to a Google account, they can continuously track the user’s movements. If the victim later receives a tracking alert, it may appear to reference their own device, making it easy to dismiss as an error.

Another issue that many users may overlook is the necessity of firmware updates for headphones and speakers. These updates typically come through brand-specific apps that many users do not install. Consequently, vulnerable devices could remain exposed for extended periods if users do not take action.

The only way to mitigate this vulnerability is by installing a software update provided by the device manufacturer. While many companies have already released patches, updates may not yet be available for every affected model. Users are advised to check directly with their manufacturers to confirm whether a security update exists for their specific device.

Importantly, the flaw does not lie within Bluetooth itself but rather within the convenience layer built on top of it. Fast Pair prioritized speed over strict ownership enforcement, which researchers argue should require cryptographic proof of ownership. Without such measures, convenience features can become potential attack surfaces. Security and ease of use can coexist, but they must be designed in tandem.

In response to these vulnerabilities, Google has been collaborating with researchers to address the WhisperPair flaws. The company began distributing recommended patches to headphone manufacturers in early September and confirmed that its own Pixel headphones have been updated.

A Google spokesperson stated, “We appreciate collaborating with security researchers through our Vulnerability Rewards Program, which helps keep our users safe. We worked with these researchers to fix these vulnerabilities, and we have not seen evidence of any exploitation outside of this report’s lab setting. As a best security practice, we recommend users check their headphones for the latest firmware updates. We are constantly evaluating and enhancing Fast Pair and Find Hub security.”

Google has indicated that the core issue stemmed from some accessory manufacturers not fully adhering to the Fast Pair specification, which requires devices to accept pairing requests only when a user has intentionally placed the device into pairing mode. Failures to enforce this rule contributed to the audio and microphone risks identified by researchers.
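The spec rule at issue can be illustrated with a minimal guard. All names below are hypothetical and this is not actual Fast Pair firmware code; the sketch only shows the behavioral difference between a compliant accessory, which accepts pairing only after an explicit user action, and the vulnerable ones, which accepted requests unconditionally.

```python
# Minimal sketch of the pairing-mode check the Fast Pair spec requires.
# Class and method names are illustrative, not from any real implementation.

class Accessory:
    def __init__(self):
        self.pairing_mode = False   # set True only by an explicit user action
        self.paired_peers = set()

    def enter_pairing_mode(self):
        """Called when the user presses the physical pairing button."""
        self.pairing_mode = True

    def handle_pairing_request(self, peer_id: str) -> bool:
        # Spec-compliant behavior: accept only while in pairing mode.
        # The vulnerable devices skipped this check and accepted requests
        # even while already connected to another phone.
        if not self.pairing_mode:
            return False
        self.paired_peers.add(peer_id)
        self.pairing_mode = False   # one pairing per explicit user action
        return True

headset = Accessory()
print(headset.handle_pairing_request("attacker-phone"))  # False: not in pairing mode
headset.enter_pairing_mode()
print(headset.handle_pairing_request("owner-phone"))     # True: user opted in
```

Under this model, the silent 10-to-15-second takeover described above fails at the first guard, which is why the updated certification tests now probe exactly this check.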

To mitigate future risks, Google has updated its Fast Pair Validator and certification requirements to explicitly test whether devices properly enforce pairing mode checks. The company has also provided accessory partners with fixes intended to resolve all related issues once applied.

On the location tracking front, Google has implemented a server-side fix that prevents accessories from being silently enrolled into the Find Hub network if they have never been paired with an Android device. This change addresses the tracking risk across all devices, including Google’s own accessories.

Despite these efforts, researchers have expressed concerns about the speed at which patches reach users and the extent of Google’s visibility into real-world exploitation that does not involve Google hardware. They argue that weaknesses in certification allowed flawed implementations to reach the market at scale, indicating broader systemic issues.

For now, both Google and the researchers agree on one crucial point: users must install manufacturer firmware updates to ensure protection, and the availability of these updates may vary by device and brand.

While users cannot entirely disable Fast Pair, they can take steps to reduce their exposure. If you use a Bluetooth accessory that supports Google Fast Pair, including wireless earbuds, headphones, or speakers, you may be affected. Researchers have developed a public lookup tool that allows users to check whether their specific device model is vulnerable. This tool can be accessed at whisperpair.eu/vulnerable-devices.

To enhance security, users are encouraged to install the official app from their headphone or speaker manufacturer, check for firmware updates, and apply them promptly. Pairing new devices in private spaces and being cautious of unexpected audio interruptions or strange sounds can also help mitigate risks. A factory reset can remove unauthorized pairings, but it does not resolve the underlying vulnerability; a firmware update is still necessary.

Bluetooth should only be active during use, and turning it off when not in use can limit exposure, although it does not eliminate the risk if the device remains unpatched. Always factory reset used headphones or speakers before pairing them to remove hidden links and account associations. Additionally, promptly installing operating system updates can block exploit paths even when accessory updates lag behind.

The WhisperPair vulnerabilities highlight how small conveniences can lead to significant privacy failures. While headphones may seem innocuous, they contain microphones, radios, and software that require regular attention and updates. Neglecting these devices can create blind spots that attackers are eager to exploit. Staying secure now necessitates a proactive approach to devices that users may have previously taken for granted.

For further information and updates, users can refer to CyberGuy.

Smart Pill Technology Confirms When Medication Is Swallowed

The Massachusetts Institute of Technology has developed a smart pill that confirms medication ingestion, potentially improving patient adherence and health outcomes while safely breaking down in the body.

Engineers at the Massachusetts Institute of Technology (MIT) have designed an innovative smart pill that confirms when a patient has swallowed their medication. This advancement aims to enhance treatment tracking for healthcare providers and help patients adhere to their medication schedules, ultimately reducing the risk of missed doses that can jeopardize health.

The smart pill incorporates a tiny, biodegradable radio-frequency antenna made from zinc and cellulose, materials that are already established as safe for medical use. This system fits within existing pill capsules and operates by emitting a signal that can be detected by an external receiver, potentially integrated into a wearable device, from a distance of up to two feet.

This entire process occurs within approximately ten minutes after ingestion. Unlike previous smart pill designs that utilized components that remained intact throughout the digestive system, raising concerns about long-term safety, the MIT team has taken a different approach. Most parts of the antenna decompose in the stomach within days, leaving only a small off-the-shelf RF chip that naturally passes through the body.

Lead researcher Mehmet Girayhan Say emphasized the goal of the project: to provide a reliable confirmation of medication ingestion without the risk of long-term buildup in the body.

This smart pill is not intended for every type of medication but is specifically designed for situations where missing a dose can have serious consequences. Potential beneficiaries include patients who have undergone organ transplants, those managing tuberculosis, and individuals with complex neurological conditions. For these patients, adherence to prescribed medication can be the difference between recovery and severe complications.

Senior author Giovanni Traverso highlighted that the primary focus of this technology is on patient health. The aim is to support individuals rather than monitor them. The research team has published its findings in the journal Nature Communications and is planning further preclinical testing, with human trials expected to follow as the technology progresses toward real-world application.

This research has received funding from several sources, including Novo Nordisk, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital Division of Gastroenterology, and the U.S. Advanced Research Projects Agency for Health.

Missed medication doses contribute to hundreds of thousands of preventable deaths annually and add billions of dollars to healthcare costs. This issue is particularly critical for patients who require consistent treatment over extended periods. For individuals in vulnerable health situations, such as organ transplant recipients or those with chronic illnesses, the implications of missed doses can be life-altering.

While the smart pill technology is still in development, it offers the potential to provide an additional layer of safety for patients relying on critical medications. It could alleviate some of the pressures faced by patients managing complex treatment plans and reduce uncertainty for healthcare providers regarding patient adherence.

However, the introduction of such technology also raises important questions about privacy, consent, and the sharing of medical data. Any future implementation will need robust safeguards to protect patient information.

For those awaiting the availability of this technology, there are still effective ways to stay on track with medication regimens. Utilizing built-in tools on smartphones can help individuals manage their medication schedules effectively.

The concept of a pill that confirms ingestion may seem futuristic, but it addresses a pressing issue in healthcare. By combining simple materials with innovative engineering, MIT researchers have created a tool that could potentially save lives without leaving harmful residues in the body. As testing continues, this approach could significantly reshape the monitoring and delivery of medical treatments.

Would you be comfortable taking a pill that reports when you swallow it if it meant better health outcomes? Share your thoughts with us at Cyberguy.com.

According to MIT, this groundbreaking technology could transform medication adherence and patient care.

Potential Discovery of New Dwarf Planet Challenges Planet Nine Theory

The potential discovery of a new dwarf planet, 2017OF201, may provide fresh insights into the elusive Planet Nine theory and the structure of the Kuiper Belt.

A team of scientists at the Institute for Advanced Study’s School of Natural Sciences in Princeton, New Jersey, has announced the potential discovery of a new dwarf planet, which could lend support to the theory of a hypothetical super-planet known as Planet Nine.

The object, designated 2017OF201, is classified as a trans-Neptunian object (TNO), a category of minor planets that orbit the Sun at distances greater than Neptune’s. Located on the fringes of our solar system, 2017OF201 stands out due to its significant size and unusual orbital characteristics.

Led by researchers Sihao Cheng, Jiaxuan Li, and Eritas Yang from Princeton University, the team utilized advanced computational methods to track the object’s distinctive trajectory in the night sky. Cheng noted that the aphelion, or the farthest point in the orbit from the Sun, of 2017OF201 is more than 1,600 times that of Earth’s orbit. In contrast, its perihelion, the closest point to the Sun, is 44.5 times that of Earth’s orbit, a pattern reminiscent of Pluto’s orbit.

2017OF201 takes approximately 25,000 years to complete a single orbit around the Sun. Yang suggested that the object likely experienced close encounters with a giant planet, which may have resulted in its ejection to a wide orbit. Cheng elaborated on this idea, proposing that the object might have initially been expelled to the Oort Cloud, the most distant region of our solar system, before being drawn back toward the Sun.
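The quoted figures hang together under Kepler’s third law, which for a body orbiting the Sun gives the period in years as the semi-major axis in astronomical units raised to the 3/2 power. A quick sketch, using the article’s aphelion and perihelion distances (1,600 AU is a lower bound, and the 25,000-year figure is the article’s rounded value):

```python
# Kepler's third law for a solar orbit: P [years] = a [AU] ** 1.5,
# where a is the semi-major axis, the mean of aphelion and perihelion.
def orbital_period_years(aphelion_au: float, perihelion_au: float) -> float:
    semi_major_axis = (aphelion_au + perihelion_au) / 2
    return semi_major_axis ** 1.5

# Figures reported for 2017OF201: aphelion > 1,600 AU, perihelion 44.5 AU.
period = orbital_period_years(1600, 44.5)
print(f"{period:,.0f} years")  # within a few percent of the quoted ~25,000
```

Since 1,600 AU is stated as a minimum, the true semi-major axis, and hence the period, would be somewhat larger, consistent with the ~25,000-year estimate.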

This discovery has important implications for our understanding of the outer solar system’s structure. In January 2016, astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech) presented research suggesting the existence of a planet approximately 1.5 times the size of Earth, located in the outer solar system. However, the existence of this so-called Planet Nine remains theoretical, as neither Batygin nor Brown has directly observed the planet.

According to the theory, Planet Nine is thought to be roughly the size of Neptune and located far beyond Pluto, in the vicinity of the Kuiper Belt, where 2017OF201 was discovered. If it exists, Planet Nine could possess a mass up to ten times that of Earth and orbit the Sun from a distance up to 30 times greater than that of Neptune. It is estimated that this hypothetical planet would take between 10,000 and 20,000 Earth years to complete one full orbit around the Sun.

Previously, the region beyond the Kuiper Belt was believed to be largely empty. However, the discovery of 2017OF201 suggests that this area may be more populated than previously thought. Cheng remarked that only about 1% of 2017OF201’s orbit is currently visible to astronomers.

“Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system,” Cheng stated in the announcement.

NASA has indicated that if Planet Nine does exist, it could help explain the peculiar orbits of certain smaller objects within the distant Kuiper Belt. As it stands, the existence of Planet Nine remains largely theoretical, with its potential presence inferred from gravitational patterns observed in the outer solar system.

This latest discovery underscores the ongoing quest to understand the complexities of our solar system and the potential for finding new celestial bodies that may reshape our understanding of its structure.

According to Fox News, the implications of 2017OF201’s discovery could be significant for future research into the outer solar system.

Meta Limits Teen Access to AI Characters for Safety Reasons

Meta Platforms will temporarily restrict access to AI characters for teenagers as it develops a new, age-appropriate version that includes parental controls and adheres to PG-13 content guidelines.

Meta Platforms announced on Friday that it will suspend access to its AI characters for teenagers across all its applications globally. This decision comes as the company works on a revised version of the feature tailored specifically for younger users.

The initiative reflects Meta’s commitment to refining the interaction between its AI products and teenage users amid increasing scrutiny regarding safety, age-appropriate design, and the implications of generative AI on social media platforms.

“Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta stated.

Once the revamped AI characters are launched, they will incorporate parental controls, allowing families greater oversight of how younger users engage with the technology. This move follows a preview of these controls released in October, where Meta indicated that parents would have the option to disable private chats between their teens and AI characters. This response was prompted by growing concerns over reports of flirtatious interactions between chatbots and minors on its platforms.

Despite the announcement, Meta clarified that these parental controls are not yet operational. Additionally, the company has committed to ensuring that its AI experiences for teenagers adhere to the PG-13 movie rating framework, aiming to restrict exposure to content considered inappropriate for minors.

The changes come at a time when U.S. regulators are intensifying their examination of AI companies and the potential risks associated with chatbots. In August, reports indicated that Meta’s internal AI guidelines had permitted provocative conversations involving minors, further amplifying the pressure on the company to enhance its safety measures.

As the landscape of AI technology continues to evolve, Meta’s proactive approach aims to address the concerns of parents and regulators alike, ensuring a safer online environment for younger users.

The suspension is expected to begin in the coming weeks, according to The American Bazaar.

Ransomware Attack Exposes Social Security Numbers at Major Gas Station Chain

A recent ransomware attack on a Texas gas station chain has exposed the personal information of over 377,000 individuals, raising concerns about data security in the retail sector.

A ransomware attack on a Texas-based gas station chain has resulted in the exposure of sensitive personal data for more than 377,000 individuals, including Social Security numbers and driver’s license information. This incident underscores the vulnerabilities that exist in industries that handle large volumes of personal data but may lack robust cybersecurity measures.

The breach was reported by Gulshan Management Services, Inc., which is affiliated with Gulshan Enterprises, the operator of approximately 150 Handi Plus and Handi Stop gas stations and convenience stores throughout Texas. According to a disclosure filed with the Maine Attorney General’s Office, the company detected unauthorized access to its IT systems in late September.

Investigators later determined that the attackers had been inside the network for about ten days before the breach was identified. The intrusion began with a phishing attack, underscoring how a single deceptive email can open the door to a significant data breach.

During this period, the attackers accessed and stole a range of personal information, subsequently deploying ransomware that encrypted files across Gulshan’s systems. The compromised data includes names, contact details, Social Security numbers, and driver’s license numbers, all of which pose serious risks for identity theft and fraud that may manifest long after the breach.

As of now, no ransomware group has publicly claimed responsibility for the attack. While this may seem like a silver lining, it does not alleviate the risks for those affected. In many ransomware incidents, the absence of a claim can indicate that the attackers have not yet released the stolen data publicly or that the victim company has resolved the situation privately.

Gulshan’s filing indicates that the company restored its systems using known-safe backups, suggesting that it opted to rebuild rather than negotiate with the attackers. However, once sensitive data has been extracted from a network, it cannot be retracted, leaving affected individuals at risk regardless of whether the stolen information appears online.

This incident highlights a recurring issue within the retail and service sectors, where businesses often rely on outdated systems and employees who may be vulnerable to phishing attacks. Although gas stations may not seem like obvious targets for cybercriminals, their payment systems, loyalty programs, and human resources databases make them attractive for data breaches.

In light of this breach, individuals whose information may have been compromised should take proactive steps to mitigate potential fallout. If the company offers free credit monitoring or identity protection services, it is advisable to enroll in those programs. Such services can provide early alerts if someone attempts to open accounts or misuse personal information.

If no such services are offered, individuals should consider signing up for a reputable identity theft protection service independently. These services can monitor personal information, such as Social Security numbers and email addresses, and alert users if their data is being sold on the dark web or used to open accounts fraudulently.

Additionally, employing a password manager can help create and store unique passwords for each account, further securing personal information against unauthorized access. Users should also check if their email addresses have been involved in past data breaches and change any reused passwords immediately if they find a match.

Implementing two-factor authentication (2FA) adds another layer of security, particularly for email, banking, and shopping accounts, which are often primary targets for cybercriminals. Furthermore, maintaining strong antivirus software can help detect phishing attempts and suspicious activity before they escalate into significant breaches.

After incidents like this, scammers frequently exploit the situation by sending fake emails or texts impersonating the affected company or credit monitoring services. It is crucial to verify any messages independently and avoid clicking on unexpected links.

Individuals should regularly review their credit reports from major bureaus for unfamiliar accounts or inquiries. They are entitled to free reports, and early detection of issues can facilitate easier resolutions.

If a Social Security number has been compromised, placing a credit freeze can prevent lenders from opening new accounts in the victim’s name, even if they possess personal details. Credit bureaus provide this service at no charge, and it can be temporarily lifted when applying for credit. Alternatively, individuals may opt for a fraud alert, which requires lenders to verify identity before approving credit.

Moreover, when Social Security numbers are stolen, tax fraud often follows, as criminals can file fake tax returns to claim refunds. An IRS Identity Protection PIN (IP PIN) can help prevent this by ensuring that only the rightful owner can file a tax return using their SSN.

It is essential to not only monitor for new fraud but also to secure existing accounts. Setting up alerts for large transactions or changes to contact information can help detect unauthorized activity early. If personal information has been compromised, contacting banks for additional protections is advisable.

This incident serves as a stark reminder that personal data is not only held by banks and healthcare providers but also by retailers and service operators. As cybercriminals exploit vulnerabilities through simple phishing emails, the potential for widespread damage increases significantly. While individuals cannot prevent such breaches, they can take steps to limit the impact of stolen data by securing their accounts and remaining vigilant.

For more information on how to protect yourself from identity theft and data breaches, visit CyberGuy.com.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a face-mounted electronic tattoo, or “e-tattoo,” to monitor mental workload in high-stress professions, utilizing EEG and EOG technology for brain activity analysis.

Scientists have introduced an innovative solution designed to help individuals in high-pressure work environments monitor their cognitive performance. This new device, known as an electronic tattoo or “e-tattoo,” is applied to the forehead and is intended to track brainwaves and mental workload.

A study published in the journal Device outlines the advantages of e-tattoos as a cost-effective and user-friendly method for assessing mental workload. Dr. Nanshu Lu, the senior author of the research from the University of Texas at Austin, emphasized that mental workload is a critical component in human-in-the-loop systems, significantly affecting cognitive performance and decision-making.

In an email to Fox News Digital, Dr. Lu noted that the motivation behind this device stems from the needs of professionals in high-demand, high-stakes jobs, including pilots, air traffic controllers, doctors, and emergency dispatchers. The technology could also benefit emergency room doctors and operators of robots or drones, enhancing both training and performance.

One of the primary objectives of the study was to develop a method for measuring cognitive fatigue in roles that require intense mental focus. The e-tattoo is designed to be temporarily affixed to the forehead and is notably smaller than existing devices on the market.

The device operates by employing electroencephalogram (EEG) and electrooculogram (EOG) technologies to monitor brain waves and eye movements. Traditional EEG and EOG machines tend to be bulky and expensive; however, the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained, “We propose a wireless forehead EEG and EOG sensor designed to be as thin and conformable to the skin as a temporary tattoo sticker, which is referred to as a forehead e-tattoo.” She further noted that understanding human mental workload is essential in the fields of human-machine interaction and ergonomics due to its direct impact on cognitive performance.

The study involved six participants who were tasked with identifying letters displayed on a screen. The letters appeared one at a time in various locations, and participants were instructed to click a mouse if either the letter or its position matched one shown previously. Each participant completed the task multiple times, with varying levels of difficulty.
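The matching rule described above resembles a dual n-back task. As a minimal sketch (assuming a 1-back variant; the study's actual task parameters are not specified here), a trial calls for a click when either the letter or its on-screen position repeats the previous stimulus:

```python
def nback_hits(letters, positions, n=1):
    """Indices of trials where the letter or its on-screen position
    matches the stimulus shown n trials earlier."""
    return [
        i
        for i in range(n, len(letters))
        if letters[i] == letters[i - n] or positions[i] == positions[i - n]
    ]

# Five hypothetical trials: only trial 2 repeats the previous letter.
letters = ["A", "B", "B", "C", "A"]
positions = [(0, 0), (1, 2), (0, 1), (1, 2), (0, 0)]
print(nback_hits(letters, positions))  # [2]
```

Harder variants raise `n`, forcing participants to hold more stimuli in working memory, which is what drives the measured rise in mental workload.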

The researchers observed that as the tasks increased in complexity, the brainwave patterns detected by the e-tattoo indicated a corresponding rise in mental workload. The device comprises a battery pack, reusable chips, and a disposable sensor, making it both practical and efficient for use in cognitive assessments.

Currently, the e-tattoo exists as a laboratory prototype. Dr. Lu mentioned that further development is necessary before it can be commercialized, including the implementation of real-time mental workload decoding and validation in more realistic settings. The prototype is estimated to cost around $200.

This groundbreaking research highlights the potential for e-tattoos to revolutionize how professionals in high-stress jobs monitor their cognitive health and performance, paving the way for advancements in training and operational efficiency.

According to Fox News, the development of this technology could significantly impact various fields by providing a more accessible means of tracking mental workload and cognitive fatigue.

Web Skimming Attacks Target Major Payment Networks and Consumers

Researchers are tracking a persistent web skimming campaign that targets major payment networks, using malicious JavaScript to steal credit card information from unsuspecting online shoppers.

As online shopping becomes ever more routine and convenient, a hidden threat lurks beneath the surface. Researchers are monitoring a long-running web skimming campaign that specifically targets businesses connected to major payment networks. The technique enables criminals to covertly insert malicious code into checkout pages and capture payment details as customers enter them. These attacks often operate unnoticed within the browser, leaving victims unaware until unauthorized charges appear on their statements.

The term “Magecart” refers to various groups that specialize in web skimming attacks. These attacks primarily focus on online stores where customers input payment information during the checkout process. Rather than directly hacking banks or card networks, attackers embed malicious code into a retailer’s checkout page. This code, typically written in JavaScript, is a standard programming language used to enhance website interactivity, such as managing forms and processing payments.

In Magecart attacks, criminals exploit this same JavaScript to covertly capture card numbers, expiration dates, security codes, and billing details as shoppers input their information. The checkout process continues to function normally, providing no immediate warning signs to users. Initially, Magecart referred specifically to attacks on Magento-based online stores, but the term has since expanded to encompass web skimming campaigns across various e-commerce platforms and payment systems.

Researchers indicate that this ongoing campaign targets merchants linked to several major payment networks. Large enterprises that depend on these payment providers face heightened risks due to their complex websites and reliance on third-party integrations. Attackers typically exploit overlooked vulnerabilities, such as outdated plugins, vulnerable third-party scripts, and unpatched content management systems. Once they gain access, they inject JavaScript directly into the checkout flow, allowing the skimmer to monitor form fields associated with card data and personal information. This data is then quietly transmitted to servers controlled by the attackers.

To evade detection, the malicious JavaScript is often heavily obfuscated. Some variants can even remove themselves if they detect an admin session, creating a false impression of a clean inspection. Researchers have also noted that the campaign utilizes bulletproof hosting services, which ignore abuse reports and takedown requests, providing attackers with a stable environment to operate. Because web skimmers function within the browser, they can circumvent many server-side fraud controls employed by merchants and payment providers.
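One standard merchant-side countermeasure against tampered or injected third-party scripts, noted here as an illustration rather than a finding of the researchers, is Subresource Integrity (SRI): the page declares an expected hash for each external script, and the browser refuses to execute a script whose contents no longer match. A sketch of computing the integrity value for a (hypothetical) checkout script:

```python
import base64
import hashlib


def sri_value(path: str, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value, suitable for an HTML
    integrity="sha384-..." attribute, for a local script file."""
    with open(path, "rb") as f:
        digest = hashlib.new(algo, f.read()).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"


# Hypothetical checkout script; any later modification, such as an
# injected skimmer, changes the hash and fails the browser's check.
with open("checkout.js", "w") as f:
    f.write("console.log('checkout loaded');")
print(sri_value("checkout.js"))  # sha384-...
```

SRI only protects scripts served from static URLs, so it complements, rather than replaces, patching the content management system and auditing third-party integrations.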

Magecart campaigns impact three groups at once: online retailers, their customers, and the payment networks. Because the exposure is spread across all three, detection and response efforts are harder to coordinate.

While consumers cannot rectify compromised checkout pages, adopting a few smart habits can help mitigate exposure, limit the misuse of stolen data, and facilitate quicker detection of fraud. One effective strategy is to use virtual and single-use cards, which are digital card numbers linked to a real credit or debit account without revealing the actual number. These cards function like standard cards during checkout but provide an additional layer of security. Many people can access these services through their existing banking apps or mobile wallets, such as Apple Pay and Google Pay, which generate temporary card numbers for online transactions.

A single-use card typically works for one purchase or expires shortly after use, while a virtual card can remain active for a specific merchant and be paused or deleted later. If a web skimming attack captures one of these numbers, attackers are generally unable to reuse it elsewhere, significantly limiting financial damage and making it easier to halt fraud.

Transaction alerts can notify users the moment their card is used, even for minor purchases. If web skimming leads to fraudulent activity, these alerts can quickly reveal unauthorized charges, allowing cardholders to freeze their accounts before losses escalate. For instance, a small test charge of $2 could indicate fraud before larger transactions occur.

Using strong, unique passwords for banking and card portals can also reduce the risk of account takeovers. A password manager can assist in generating and securely storing these credentials. Additionally, individuals should check if their email addresses have been compromised in past data breaches. Many password managers include built-in breach scanners that alert users if their information appears in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Robust antivirus software can block connections to malicious domains used to collect skimmed data and alert users about unsafe websites. This protection is essential for safeguarding personal information and digital assets from potential threats, including phishing emails and ransomware scams.

Data removal services can also help minimize the amount of personal information exposed online, making it more challenging for criminals to match stolen card data with complete identity details. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of targeted attacks.

Regularly reviewing financial statements, even for small charges, is another prudent practice, as attackers often test stolen cards with low-value transactions. The Magecart web skimming campaign illustrates how attackers can exploit trusted checkout pages without disrupting the shopping experience. Although consumers cannot fix compromised sites, implementing simple safeguards can help reduce risk and facilitate early detection of fraud. Online payments rely on trust, but this campaign underscores the importance of pairing that trust with caution.

As awareness of web skimming grows, consumers may find themselves reconsidering the safety of online checkout processes. For further information and resources on protecting against these threats, visit CyberGuy.com.

Indian-American CEO Vasudha Badri-Paul Launches AI Accelerator in East Bay

Vasudha Badri-Paul, founder and CEO of Avatara AI, discusses her transition from corporate life to launching an AI accelerator aimed at fostering innovation in California’s East Bay.

Vasudha Badri-Paul, the founder and CEO of Avatara AI, has embarked on an ambitious journey to reshape the landscape of artificial intelligence startups in California’s East Bay. After a lengthy corporate career, she is now focused on building an AI accelerator that aims to nurture the next generation of innovators.

In 2023, Badri-Paul established Avatara AI, a San Francisco-based firm dedicated to helping businesses design and manage AI solutions. She recognized the urgent need for companies to adapt to the rapidly evolving AI landscape. “AI is advancing at such a rapid pace that failing to continuously update your skills can leave you obsolete almost overnight,” she noted.

However, her decision to leave a stable corporate career was also influenced by the Bay Area’s unpredictable hiring environment. “I would say that the job lifespan in the Bay Area is two years, and it’s the same across sectors—corporate, tech, marketing, sales, everywhere,” she explained. With experience at major corporations like Pfizer, Microsoft, GE, Cisco, and Intel, Badri-Paul has witnessed firsthand the constant churn in the job market.

She elaborated on the challenges of this cycle, stating, “There is a constant churn. Reasons range from no funding to restructuring, and people are asked to leave every few years. This recurring cycle in the Bay Area job market that results in redundancies gets tiring after a while. Everyone is watching their back; there is no margin for humanity.”

Frustrated by this instability, Badri-Paul decided to take a bold step: “I took a hard stance and thought of building a company of my own.” As an early innovator in the AI space, she recognized the transformative potential of AI across various sectors. At Avatara, she oversees the development and deployment of AI solutions, focusing on responsible and ethical practices.

In addition to her work at Avatara, Badri-Paul is enthusiastic about the opportunities emerging in the East Bay region. She recently launched the Velocity East Accelerator, which she envisions as a catalyst for the future of AI in the area. “In California, Silicon Valley is where all the tech happens. It is the start-up empire. Despite this boom, some parts of Silicon Valley remain underrepresented, and we have been seeing a shift in the trend,” she stated.

Badri-Paul believes that the East Bay is on the verge of significant growth. “East Bay has kind of taken off,” she remarked. Through Velocity East, she aims to create a hub for innovation and entrepreneurship. As a long-time California resident, she has observed how migration patterns have spurred development in the region. “During Covid, a builder built about 20,000 homes in East Bay. A lot of migration happened during that time,” she noted.

Despite the influx of new residents, Badri-Paul observed a lack of formal support for startups in the area. “While there is a boom in newer residents, there was no formal atmosphere to nurture startups in the area, no Y Combinators—basically no ecosystem to help build ideas,” she explained.

With this vision in mind, she launched Velocity East, an AI accelerator based in San Ramon. Badri-Paul emphasized that the goal of the accelerator is not to replicate existing tech programs but to highlight the potential for groundbreaking AI companies to emerge from the East Bay. “We are talking about areas such as Fremont, Concord, as well as across Alameda and Contra Costa counties,” she said.

Velocity East is powered by The AI Foundry community and aims to accelerate early-stage AI startups through mentorship, resources, and access to capital. Badri-Paul added, “We also build bridges between East Bay innovators and the broader Bay Area ecosystem and create pathways for underrepresented founders to lead in AI.”

Her larger vision is to establish San Ramon and Bishop Ranch as legitimate hubs for AI innovation, shining a spotlight on the East Bay as a vital player in the tech landscape.

As Badri-Paul continues to navigate her entrepreneurial journey, she remains committed to fostering an environment where innovation can thrive, ensuring that the East Bay is recognized as a key contributor to the future of artificial intelligence.

According to The American Bazaar, Badri-Paul’s efforts represent a significant shift in the tech ecosystem, highlighting the importance of nurturing local talent and ideas.

Rising Data Center Growth May Lead to Increased Electricity Costs

A new study reveals that the rapid growth of data centers could significantly increase electricity costs and strain power grids, posing environmental challenges.

A recent study conducted by the Union of Concerned Scientists highlights the potential consequences of the rapid construction of data centers, warning that this surge in demand for electricity could lead to soaring energy costs and environmental harm.

Published on Monday, the report indicates that the pace at which data centers are being built is outstripping the ability of utilities to supply adequate electricity. Mike Jacobs, a senior manager of energy at the organization, emphasized the challenge: “They’re increasing the demand faster than you can increase the supply. How’re you going to do that?”

The report, titled “Data Center Power Play,” models various electricity demand scenarios over the next 25 years, alongside different energy policy approaches to meet these demands. The study aims to estimate the potential costs in terms of electricity, climate impact, and public health, which could amount to trillions of dollars.

Jacobs noted that implementing clean energy policies could mitigate these costs while reducing air pollution and health impacts. He pointed out that the construction of an electric grid capable of meeting the rising demand for power will take significantly longer than building new data centers.

“This is a collision between the people whose philosophy is ‘move fast and break things,’ with the utility industry that has nobody that says move fast and break things,” Jacobs remarked, referring to the rapid expansion of data center facilities. He also mentioned that predicting future demand for data centers is challenging due to limited information from utilities and major tech companies. How this demand is addressed will be crucial for both public health and environmental sustainability.

Jacobs further stated, “This is really a great moment for regulators to do what’s within their authority and sort out and assign the costs to those who cause them, which is an essential principle of utility ratemaking.”

In recent years, tech companies have aggressively expanded their data center operations, driven by the booming demand for artificial intelligence. Major firms such as OpenAI, Google, Meta, and Amazon have made substantial investments in data centers, with projects like Stargate serving as critical infrastructure for AI development.

While the growth of data centers brings job opportunities and digital advancements, it also raises significant concerns regarding their substantial energy and water consumption. Data centers typically rely on water-intensive cooling systems, which can exacerbate existing water scarcity issues.

For instance, a single 100 megawatt (MW) data center can consume over two million liters of water daily, an amount comparable to the daily usage of approximately 6,500 households. This demand is particularly concerning in regions already facing water shortages, such as parts of Georgia, Texas, Arizona, and Oregon, where it places additional stress on aquifers and municipal water supplies.
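As a quick sanity check on the comparison above (the per-household rate below is an inference from the reported figures, not a number from the study):

```python
# Figures as reported: a 100 MW data center can use over two million
# liters of water per day, comparable to roughly 6,500 households.
center_liters_per_day = 2_000_000
households = 6_500

implied_per_household = center_liters_per_day / households
print(round(implied_per_household))  # ~308 liters per household per day
```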

The findings of this study underscore the urgent need for a balanced approach to energy policy and infrastructure development, ensuring that the growing demands of data centers do not come at the expense of environmental sustainability and public health, according to The Union of Concerned Scientists.

U.S. Supports India-Singapore Submarine Cable Project for Enhanced Connectivity

The U.S. Trade and Development Agency has announced support for a submarine cable project linking India and Singapore, aimed at enhancing connectivity and security in Southeast Asia.

WASHINGTON, DC – On January 20, the U.S. Trade and Development Agency (USTDA) announced its backing for a proposed submarine cable system that will connect India with Singapore and key data hubs across Southeast Asia.

The planned cable route is set to link Chennai, India, with Singapore, while additional landing points are under consideration in Malaysia, Thailand, and Indonesia, according to USTDA.

As part of this initiative, USTDA has signed an agreement with SubConnex Malaysia Sdn. Bhd. to fund a feasibility study for the SCNX3 submarine cable system. This project is expected to serve approximately 1.85 billion people by enhancing digital infrastructure in the region.

The feasibility study aims to attract investment for the cable system and expand the capacity necessary for Artificial Intelligence and cloud-based services. USTDA emphasized that this effort will also help ensure the reliability and security of international networks while minimizing exposure to cyber threats and foreign interference.

The agreement was formalized during the Pacific Telecommunications Council 26 conference held in Honolulu, Hawaii.

SubConnex has appointed Florida-based APTelecom LLC to conduct the feasibility study. The study will encompass various aspects, including route design, engineering, financial modeling, commercialization planning, and regulatory analysis.

The SCNX3 submarine cable is designed to address the increasing connectivity challenges faced by India and Southeast Asia. USTDA noted that the rising demand for digital services, coupled with limited route diversity, has rendered existing networks susceptible to outages and security vulnerabilities.

By introducing new and resilient data pathways, the project is anticipated to enhance digital access and support the growth of Artificial Intelligence and cloud services. USTDA stated that the cable will provide a secure and reliable communications infrastructure for governments, businesses, and citizens throughout South and Southeast Asia.

Furthermore, USTDA highlighted that the feasibility study will promote the use of secure cable technology, safeguarding data flows from potential malicious foreign influences. This concern is increasingly relevant as undersea cables facilitate the majority of global internet and data traffic.

According to IANS, the initiative represents a significant step toward improving digital connectivity in the region.

Dialog Aims to Strengthen Ethical Canada-India AI Collaboration

India and Canada strengthen their partnership in artificial intelligence through the ‘India-Canada AI Dialogue 2026,’ focusing on ethical and inclusive AI development.

TORONTO — The Consulate General of India in Toronto recently hosted the ‘India-Canada AI Dialogue 2026,’ highlighting India’s pivotal role in fostering inclusive, responsible, and impactful artificial intelligence (AI). This event underscored the importance of bilateral cooperation for mutual economic and societal benefits.

Organized in collaboration with the University of Waterloo, the Canada India Tech Council, and Zoho Inc., the dialogue attracted over 600 senior leaders. Participants included C-suite executives, policymakers, and researchers from various sectors, including government, industry, academia, and the innovation ecosystem across Canada. The gathering aimed to enhance collaboration in the field of artificial intelligence.

Dinesh K. Patnaik, the High Commissioner of India to Canada, emphasized the significance of the dialogue, stating, “The India-Canada AI Dialogue 2026 reflects our shared vision for shaping the future of artificial intelligence responsibly. As we build momentum toward the India AI Impact Summit 2026 in New Delhi, this engagement highlights how trusted partners like Canada can collaborate with India to drive innovation that is inclusive, ethical, and globally relevant.”

Canadian Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, addressed the attendees, noting, “AI is no longer an abstract or future-facing conversation — it’s shaping how we work, govern, and relate to one another. What makes the India-Canada AI Dialogue so important is that it puts impact, accountability, and human outcomes at the center of the discussion. India and Canada bring different strengths, but a shared responsibility: to make sure this technology serves people, strengthens societies, and delivers real economic value.”

Doug Ford, the Premier of Ontario, also shared his insights on the dialogue’s significance, stating, “India and Canada share a deep and long-standing partnership, one built on robust trade and investment, people-to-people ties, and research partnerships in emerging technologies such as artificial intelligence.”

The dialogue serves as a platform for both nations to explore innovative solutions in AI while ensuring that ethical considerations remain at the forefront of technological advancements. As the world increasingly relies on AI, the collaboration between India and Canada is poised to set a precedent for responsible AI development globally.

According to IANS, the event marks a significant step in enhancing the Canada-India relationship in the tech sector, particularly in artificial intelligence.

Indian-American Anjeneya Dubey Appointed CTO of Imagine Learning

Anjeneya Dubey, an Indian American cloud and AI leader, has been appointed Chief Technology Officer at Imagine Learning to enhance its AI-driven educational solutions.

Anjeneya Dubey, a prominent Indian American leader in cloud and artificial intelligence, has joined Imagine Learning as Chief Technology Officer (CTO). In this role, he will focus on advancing the company’s Curriculum-Informed AI roadmap, which aims to enhance educator-trusted platforms that connect curriculum, insights, and educational impact.

Imagine Learning, based in Tempe, Arizona, is recognized as a leading provider of digital-first K–12 solutions in the United States. Dubey’s appointment is part of the company’s strategy to ensure that instructional rigor, educator trust, and adaptive innovation remain central to every product experience.

With over two decades of global experience in software engineering, AI innovation, and cloud platforms, Dubey brings a wealth of expertise to his new position. Most recently, he served as the Global Head of Platform Engineering at Honeywell, where he led engineering efforts for digital education platforms used across both K–12 and higher education sectors.

Leslie Curtis, Executive Vice President and Chief Administrative Officer of Imagine Learning, expressed enthusiasm about Dubey’s appointment. “As we build the next era of learning technology, we are investing in leadership that understands both the complexity of enterprise-scale systems and the nuance of classroom impact,” she stated. “Anj’s deep background in SaaS products, data and AI platforms, and developer productivity makes him the ideal leader to power our next wave of curriculum-aligned innovation.”

Dubey’s extensive experience includes building Software as a Service (SaaS) platforms and AI-powered delivery pipelines. He has overseen global cloud infrastructure across major platforms such as AWS, Azure, and Google Cloud Platform (GCP), and has led teams of over 400 engineers across five regions. His contributions to the field are further underscored by multiple patents in hybrid and multi-cloud architectures, as well as the design of platforms serving more than 21 million users in both educational and industrial domains.

In his own words, Dubey expressed excitement about joining Imagine Learning at a crucial time. “This role is a chance to shape how AI can responsibly enhance instructional outcomes, deepen personalization, and support the educators who drive student success every day,” he said. “Our goal is to bring meaningful technology to classrooms — not just automation, but intelligence that understands and elevates learning.”

Dubey’s appointment reflects a broader trend within the education industry, which is increasingly seeking executive talent from cloud-native and AI-forward organizations. Imagine Learning’s strategic move underscores its commitment to maintaining its position as a market leader focused on instructional quality and platform intelligence.

As CTO, Dubey will oversee Imagine Learning’s engineering, DevOps, AI/ML, and cloud teams. His initial initiatives will focus on strengthening the company’s curriculum data pipeline, accelerating time-to-insight for educators, and enhancing product reliability for over 18 million students across the nation.

Dubey holds a Bachelor of Technology degree in Electronics and Communication from Madan Mohan Malaviya University of Technology in India, as well as an Executive Certificate in Business Administration and Management from the Mendoza College of Business at the University of Notre Dame.

This appointment marks a significant step for Imagine Learning as it continues to innovate and adapt in the rapidly evolving landscape of educational technology, according to a company release.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

The discovery of a massive interstellar object, 3I/ATLAS, has sparked speculation among scientists, including a Harvard physicist, about its potential technological origins.

A recently discovered interstellar object, known as 3I/ATLAS, is raising eyebrows among astronomers due to its unusual characteristics. Harvard physicist Dr. Avi Loeb suggests that the object’s peculiar features may indicate it is more than just a typical comet.

“Maybe the trajectory was designed,” Dr. Loeb, a science professor at Harvard University, told Fox News Digital. “If it had an objective to sort of be on a reconnaissance mission, to either send mini probes to those planets or monitor them… It seems quite anomalous.”

First detected in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile, 3I/ATLAS marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb pointed out that images of the object reveal an unexpected glow appearing in front of it, rather than the typical tail that comets exhibit. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.”

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is unusually bright given its distance from the sun. However, Dr. Loeb emphasizes that its most striking feature is its trajectory.

“If you imagine objects entering the solar system from random directions, just one in 500 of them would be aligned so well with the orbits of the planets,” he noted. The interstellar object, which arrived from the direction of the center of the Milky Way galaxy, is expected to pass near Mars, Venus, and Jupiter—an event that Dr. Loeb claims is highly improbable to occur by chance.

“It also comes close to each of them, with a probability of one in 20,000,” he added.

According to NASA, 3I/ATLAS will reach its closest point to the sun—approximately 130 million miles away—on October 30.

“If it turns out to be technological, it would obviously have a big impact on the future of humanity,” Dr. Loeb stated. “We have to decide how to respond to that.”

In January, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics mistakenly identified a Tesla Roadster launched into orbit by SpaceX CEO Elon Musk as an asteroid, highlighting the complexities of identifying objects in space.

A spokesperson for NASA did not immediately respond to requests for comment regarding 3I/ATLAS, leaving the scientific community eager for further insights into this intriguing interstellar visitor.

As the object approaches its closest point to the sun, the implications of its unusual characteristics continue to fuel speculation and debate among astronomers and physicists alike, according to Fox News.

Apple Alerts Users to Security Vulnerability in Millions of iPhones

Apple has issued a warning that a significant security flaw affects approximately 800 million iPhones, urging users to update to iOS 26.2 to mitigate critical vulnerabilities in Safari and WebKit.

Apple’s iPhone, the leading smartphone in the United States and widely used globally, is facing a serious security threat. Recent data indicates that a critical vulnerability could impact around half of all iPhone users, leaving hundreds of millions of devices at risk.

Over the past few weeks, Apple has been alerting users to a significant security flaw that affects an estimated 800 million devices. This vulnerability stems from two critical issues identified in WebKit, the underlying engine that powers Safari and other browsers on iOS. According to Apple, these flaws have been exploited in sophisticated attacks targeting specific individuals, enabling malicious websites to execute harmful code on iPhones and iPads. This could allow attackers to gain control of the device, steal passwords, or access sensitive payment information simply by visiting a compromised site.

In response to this threat, Apple quickly released a software update to address the vulnerabilities. However, reports suggest that many users have yet to install it. Estimates indicate that approximately 50 percent of eligible users have not upgraded from iOS 18 to the latest version, iOS 26.2, leaving a staggering number of devices vulnerable worldwide. According to data from StatCounter, the situation may be even more dire, with only about 20 percent of users having completed the update so far. Once security details become public, the risk of exploitation rises sharply, because attackers know exactly which flaws to target.

Apple has specified that certain devices are affected by this vulnerability if they remain unupdated. Users are strongly encouraged to check their devices and ensure they have installed the latest software to protect against potential attacks.

There is no simple setting or browsing habit that can mitigate this issue; the vulnerability is embedded deep within the browser engine. Security experts emphasize that the only effective defense is to install the latest software update. Apple is not offering a security-only patch for users who wish to remain on iOS 18; unless a device cannot support iOS 26, the fix is available only in the latest versions of iOS 26.2 and iPadOS 26.2.

Updating is generally a straightforward process. If automatic updates are enabled, users may already have the fix installed. To update manually, make sure the device is connected to Wi-Fi and has sufficient battery or is plugged in, then open Settings, tap General, then Software Update, and install the latest version.

While keeping your iPhone updated is crucial, it should not be the sole line of defense against threats. Utilizing strong antivirus software can provide an additional layer of protection by scanning for malicious links, blocking risky websites, and alerting users to suspicious activity before any damage occurs. This is particularly important given that many attacks exploit compromised websites or hidden browser vulnerabilities. Security software can help identify threats that may slip through and offer greater visibility into device activity.

Think of antivirus software as a backup protection measure. Software updates close known vulnerabilities, while robust antivirus tools help guard against emerging threats.

Apple’s use of the term “extremely sophisticated” in describing the threat underscores the seriousness of the situation. This flaw illustrates how even trusted browsers can become pathways for attacks when updates are delayed. Users who rely on their iPhones for banking, shopping, or work should treat this update as urgent.

As the landscape of cybersecurity continues to evolve, users are left to consider how long they typically wait before installing major iPhone updates. Is that delay worth the risk? Feedback and insights can be shared at Cyberguy.com.

For further information on the best antivirus protection options for Windows, Mac, Android, and iOS devices, visit Cyberguy.com.

According to CyberGuy.com, staying informed and proactive about software updates is essential for maintaining device security.

Andreessen Horowitz Invests $3 Billion in AI Infrastructure Development

Venture capital firm Andreessen Horowitz has made a significant investment of $3 billion in artificial intelligence infrastructure, reflecting its confidence in the sector’s long-term growth potential.

Andreessen Horowitz, one of Silicon Valley’s most influential venture capital firms, is making a bold investment in the future of artificial intelligence (AI), but its approach diverges from the trends seen in the industry.

Commonly referred to as a16z, the firm has committed approximately $3 billion to companies focused on developing the software infrastructure that supports AI. This investment highlights both a strong belief in the long-term growth of AI and a cautious stance regarding the inflated valuations that have characterized the industry in recent years.

In 2024, Andreessen Horowitz launched a dedicated AI infrastructure fund with an initial investment of $1.25 billion. This fund specifically targets startups that create essential tools for developers and enterprises, rather than the more glamorous consumer products dominating headlines. In January, the firm announced an additional investment of around $1.7 billion, bringing its total commitment to approximately $3 billion.

The focus of this fund is on what a16z defines as AI infrastructure. This includes systems that assist technical teams in building, securing, and deploying AI technologies. Key areas of investment encompass coding platforms, foundational model technologies, and networking security tools that are integral to the operation of AI systems.

This strategic move reflects a nuanced understanding of the current landscape, often referred to as the AI bubble. While soaring valuations have drawn parallels to previous tech booms, leaders at Andreessen Horowitz assert that the current frenzy obscures significant advancements occurring beneath the surface.

“Some of the most important companies of tomorrow will be infrastructure companies,” stated Raghu Raghuram, a managing partner at the firm and former CEO of VMware, in a recent statement.

The firm’s investment strategy is already yielding positive results. Several AI startups backed by Andreessen Horowitz have achieved lucrative exits or formed valuable partnerships. For instance, Stripe announced its acquisition of Metronome, an AI billing platform supported by the fund, for approximately $1 billion. Additionally, major tech corporations such as Salesforce and Meta have acquired other AI services backed by the firm.

One notable success story is Cursor, an AI coding startup whose valuation skyrocketed to about $29.3 billion last year, a remarkable increase from the $400 million valuation at the time of Andreessen Horowitz’s initial investment.

Despite these successes, concerns linger regarding the overall health of the industry. Critics argue that many private valuations are disconnected from sustainable business fundamentals, with some startups being valued as if they are poised to revolutionize entire sectors overnight.

Ben Horowitz, co-founder and general partner of Andreessen Horowitz, acknowledged that it is premature to draw definitive conclusions about the fund’s performance, which is typically assessed over a decade or more. Nevertheless, he described the fund as “one of the best funds, like, I’ve ever seen.”

The investment strategy is supported by a leadership team that brings a diverse perspective to the table. Martin Casado, a former computational physicist and seasoned coder who oversees the infrastructure unit, noted that while private valuations may appear “crazy,” the demand for AI-focused tools and services remains strong.

Industry analysts suggest that even if certain segments of the market experience a slowdown, a focus on foundational software—rather than merely trendy applications—could position Andreessen Horowitz favorably for the long term.

As the tech sector continues to evolve, the implications of this $3 billion investment will be closely monitored. Whether it will prove successful during a potential tech downturn or reshape how companies implement AI remains one of the most anticipated experiments in the industry.

According to The American Bazaar, Andreessen Horowitz’s strategic focus on AI infrastructure positions it uniquely within a rapidly changing technological landscape.

Novartis Appoints Indian-American Gayathri Raghupathy as Executive Director of AI and Process Excellence

Novartis has appointed Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence, where she will leverage AI to enhance processes and focus on patient care.

Leading innovative medicines company Novartis has announced the appointment of Indian American scientist Gayathri Raghupathy as Executive Director of Functional AI and Process Excellence in U.S. Medical.

In her new role, Raghupathy will collaborate with cross-functional teams to harness artificial intelligence, reimagine critical processes, and scale intelligent solutions that prioritize science and patient care, according to a media release.

“Excited to share about joining Novartis,” Raghupathy expressed on LinkedIn. “I will be working with some amazing teams to harness AI, reimagine processes, and scale intelligent solutions that free us to focus on what matters most: science and patients.”

She also reflected on her career journey, stating, “Grateful for the journey from the lab to medical communications to building AI products in a startup environment, and for the incredible partners who helped shape this path. There’s so much to learn and grow into, and I can’t imagine a better place than Novartis, with its deep commitment to innovation and patients.”

Raghupathy describes herself as a “scientist turned AI strategist who loves turning fuzzy challenges into clear AI opportunities.” She emphasizes her focus on creating AI solutions that address real pain points, connecting various domains such as science, data, process, and operations to design scalable solutions.

“I thrive in fast-paced, 0-to-1 environments where experimentation and teamwork drive progress,” she noted. “Always curious, always learning, and excited about the next wave of human-centered AI in healthcare.”

Prior to her role at Novartis, Raghupathy spent over six years at Kognitic, Inc., a startup where she played a pivotal role in shaping the scientific and business strategy behind AI-enabled intelligence solutions. Most recently, she served as Chief Strategy Officer, having previously held positions such as Vice President of Scientific Strategy and Lead of Scientific & Business Strategy. Her work at Kognitic included driving product innovation, enhancing data quality processes, and collaborating with marketing and medical affairs leaders in the pharmaceutical sector to achieve comprehensive outcomes.

Earlier in her career, Raghupathy worked at BGB Group as a Medical Writer, where she supported scientific content development across various initiatives, including congress planning, promotional strategy, competitive intelligence, and digital education. She also created physician-facing materials and training assets for medical and commercial teams.

Raghupathy’s foundational experience includes co-founding CUNY Biotech and GRO-Biotech, community-led initiatives aimed at connecting life-science researchers with the biopharma ecosystem. Her academic background features a PhD in Molecular, Cell, and Developmental Biology from the Graduate Center at the City University of New York, where her research focused on gene regulation relevant to advancements in T-cell gene therapy.

As she embarks on this new chapter at Novartis, Raghupathy is poised to make significant contributions to the integration of AI in healthcare, ultimately enhancing patient outcomes and driving innovation in the medical field.

The information in this article is based on a media release from Novartis.

Fiber Broadband Provider Investigates Data Breach Impacting One Million Users

Brightspeed is investigating a potential security breach that may have exposed sensitive data of over 1 million customers, as hackers claim to have accessed personal and payment information.

Brightspeed, one of the largest fiber broadband providers in the United States, is currently investigating claims of a significant security breach that allegedly involves sensitive data tied to more than 1 million customers. The allegations emerged when a group identifying itself as the Crimson Collective posted messages on Telegram, warning Brightspeed employees to check their emails. The group asserts it has access to over 1 million residential customer records and has threatened to release sample data if the company does not respond.

As of now, Brightspeed has not confirmed any breach. However, the company stated that it is actively investigating what it refers to as a potential cybersecurity event. According to the Crimson Collective, the stolen data includes a wide array of personally identifiable information. If these claims are accurate, the data could pose serious risks for identity theft and fraud for affected customers.

Brightspeed has emphasized its commitment to addressing the situation. In a statement shared with BleepingComputer, the company indicated that it is rigorously monitoring threats and working to understand the circumstances surrounding the alleged breach. Brightspeed also mentioned that it will keep customers, employees, and authorities informed as more details become available.

Despite the ongoing investigation, there has been no public notice on Brightspeed’s website or social media channels confirming any exposure of customer data. Founded in 2022, Brightspeed is a U.S. telecommunications and internet service provider that emerged after Apollo Global Management acquired local exchange assets from Lumen Technologies. Headquartered in Charlotte, North Carolina, the company serves rural and suburban communities across 20 states and has rapidly expanded its fiber footprint, reaching over 2 million homes and businesses with plans to extend to over 5 million locations.

Given Brightspeed’s focus on underserved areas, many customers rely on the company as their primary internet provider, making any potential breach particularly concerning. The Crimson Collective is not new to targeting high-profile entities. In October, the group breached a GitLab instance associated with Red Hat, stealing hundreds of gigabytes of internal development data. This incident later had repercussions, as Nissan confirmed in December that personal data for approximately 21,000 Japanese customers was exposed through the same breach.

More recently, researchers have noted that the Crimson Collective has targeted cloud environments, including Amazon Web Services, by exploiting exposed credentials and creating unauthorized access accounts to escalate privileges. This track record adds weight to the group’s claims, making them difficult to dismiss.

Even though Brightspeed has yet to confirm a breach, the mere existence of these claims raises significant concerns. If customer data has indeed been accessed, it could be exploited for phishing scams, account takeovers, or payment fraud. Cybercriminals often act quickly following breaches, which means customers should remain vigilant even before an official notice is issued.

A spokesperson for Brightspeed stated, “We take the security of our networks and the protection of our customers’ and employees’ information seriously and are rigorous in securing our networks and monitoring threats. We are currently investigating reports of a cybersecurity event. As we learn more, we will keep our customers, employees, stakeholders, and authorities informed.”

While the investigation unfolds, customers are encouraged to take proactive steps to protect themselves. Most data breaches lead to similar downstream risks, including phishing scams, account takeovers, and identity theft. Establishing good security habits now can help safeguard online accounts.

Scammers often exploit breach headlines to create panic. Customers should be cautious with emails, calls, or texts that mention internet account billing problems or service changes. If a message creates a sense of urgency or pressure, it is advisable to pause before responding. Avoid clicking on links or opening attachments related to account notices or payment issues. Instead, open a new browser window and navigate directly to the company’s official website or app.

Utilizing strong antivirus software can provide an additional layer of protection against malicious downloads. This software can also alert users to phishing emails and ransomware scams, helping to keep personal information and digital assets secure.

Changing Brightspeed account passwords and reviewing passwords for other important accounts is also recommended. Users should create strong, unique passwords that are not reused elsewhere. A trusted password manager can assist in generating and storing complex passwords, making account takeovers more difficult.
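The advice above — strong, unique passwords that are never reused — is easy to follow programmatically. As a minimal sketch, Python’s standard `secrets` module (designed for cryptographic randomness) can generate such passwords; the function name, length, and character-class checks here are illustrative choices, not a Brightspeed or vendor tool:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation,
    requiring at least one character from each class."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # different every run
```

A dedicated password manager remains the better everyday option, since it also stores the result; a sketch like this only covers generation.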

Customers should also check if their email addresses have been exposed in past breaches. Some password managers include built-in breach scanners that can identify whether email addresses or passwords have appeared in known leaks. If a match is found, it is crucial to change any reused passwords and secure those accounts with new, unique credentials.

Personal data can quietly circulate across data broker sites. Employing a data removal service can help limit the amount of personal information available publicly. While no service can guarantee complete removal of data from the internet, these services actively monitor and systematically erase personal information from numerous websites, reducing the risk of scammers targeting individuals.

Brightspeed allows customers to activate account and billing alerts through the My Brightspeed site or app. Users can select which notifications they wish to receive via email or text. These alerts can help detect unusual activity early and enable prompt responses to potential threats.

Regularly checking bank and credit card statements is also advisable. Customers should look for small or unfamiliar charges, as criminals may test stolen data with low-dollar transactions before attempting larger fraud. If sensitive information may have been compromised, placing a fraud alert or credit freeze can provide additional protection, making it more challenging for criminals to open new accounts in a victim’s name.

Brightspeed’s investigation is ongoing, and the company has pledged to share updates as more information becomes available. The situation underscores the increasing value of customer data and the aggressive tactics employed by extortion groups targeting infrastructure providers. For customers, exercising caution remains the best defense, while transparency and prompt action will be crucial for companies if these claims prove to be valid.

For more information on protecting personal data and staying informed about cybersecurity threats, visit CyberGuy.com.

WhatsApp Web Malware Automatically Distributes Banking Trojan to Users

A new malware campaign is exploiting WhatsApp Web to spread Astaroth banking trojan through trusted conversations, posing significant risks to users.

A recent malware campaign is transforming WhatsApp Web into a tool for cybercriminals. Security researchers have identified a banking Trojan linked to Astaroth that spreads automatically through chat messages, complicating efforts to halt the attack once it begins. This campaign, dubbed Boto Cor-de-Rosa, highlights the evolving tactics of cybercriminals who exploit trusted communication platforms.

The attack primarily targets Windows users, utilizing WhatsApp Web as both the delivery mechanism and the means of further spreading the infection. The process begins innocuously with a message from a contact containing what appears to be a harmless ZIP file. The file name is designed to look random and benign, which reduces the likelihood of suspicion.

Inside the ZIP file is a Visual Basic script disguised as a standard document. If the user runs it, the script quietly downloads two additional pieces of malware, including the Astaroth banking trojan, which is written in Delphi. A Python-based module is also installed to control WhatsApp Web, allowing the malware to operate in the background without any obvious warning signs. This self-sustaining infection mechanism makes the campaign particularly dangerous.

What sets this campaign apart is its method of propagation. The Python module scans the victim’s WhatsApp contacts and automatically sends the malicious ZIP file to every conversation. Researchers from Acronis have noted that the malware even tailors its messages based on the time of day, often including friendly greetings to make the communication feel familiar. Messages such as “Here is the requested file. If you have any questions, I’m available!” appear to come from trusted contacts, leading many recipients to open them without hesitation.

The malware is also designed to monitor its own effectiveness in real time. The propagation tool tracks the number of successfully delivered messages, failed attempts, and the overall sending speed. After every 50 messages, it generates progress updates, allowing attackers to measure their success quickly and adapt their strategies as needed.

To evade detection by antivirus software, the initial script is heavily obfuscated. Once executed, it launches PowerShell commands that download additional malware from compromised websites, including a known domain, coffe-estilo.com. The malware installs itself in a folder that mimics a Microsoft Edge cache directory, containing executable files and libraries that comprise the full Astaroth banking payload. This allows the malware to steal credentials, monitor user activity, and potentially access financial accounts.
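The install pattern described — executable payloads dropped into a folder imitating a browser cache — suggests a simple triage check: a genuine cache directory should not contain executables or scripts. The following Python sketch (the extension list and function name are illustrative assumptions, not part of any published detection tooling) walks a directory and flags such files:

```python
import os

# Extensions that have no business inside a browser cache directory
# (an illustrative, non-exhaustive list).
PAYLOAD_EXTENSIONS = {".exe", ".dll", ".ps1", ".vbs", ".bat", ".cmd"}

def find_unexpected_executables(cache_dir: str) -> list[str]:
    """Walk a cache-style directory tree and return paths of files
    whose extension suggests an executable or script payload."""
    hits = []
    for root, _dirs, files in os.walk(cache_dir):
        for fname in files:
            if os.path.splitext(fname)[1].lower() in PAYLOAD_EXTENSIONS:
                hits.append(os.path.join(root, fname))
    return sorted(hits)
```

A check like this is only a first pass; anything it flags warrants a full antivirus scan rather than manual deletion.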

WhatsApp Web’s popularity stems from its ability to mirror phone conversations on a computer, making it convenient for users to send messages and share files. However, this convenience also introduces significant risks. When users connect their phones to WhatsApp Web by scanning a QR code at web.whatsapp.com, the browser session becomes a trusted extension of their account. This means that if malware gains access to a computer with an active WhatsApp Web session, it can act on behalf of the user, reading messages, accessing contact lists, and sending files that appear legitimate.

This exploitation of WhatsApp Web as a delivery system for malware is particularly concerning. Rather than infiltrating WhatsApp itself, attackers take advantage of an open browser session to spread malicious files automatically. Many users remain unaware of the potential dangers, as WhatsApp Web often feels harmless and is frequently left signed in on shared or public computers. In these scenarios, malware does not require sophisticated methods; it simply needs access to a trusted session.

To mitigate the risks associated with this type of malware, users should adopt several smart habits. First and foremost, never open ZIP files sent through chat unless you have confirmed the sender’s identity. Be cautious of file names that appear random or unfamiliar, and treat messages that create a sense of urgency or familiarity as potential warning signs. If a file arrives unexpectedly, take a moment to verify its authenticity before clicking.
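Verifying an attachment before opening it need not mean running it. As a minimal sketch, Python’s standard `zipfile` module can list an archive’s contents without extracting anything, flagging members whose extensions are commonly abused to disguise scripts as documents (the extension list and function name here are illustrative assumptions):

```python
import zipfile

# Extensions commonly abused to disguise scripts as documents
# (an illustrative, non-exhaustive list).
SUSPICIOUS_EXTENSIONS = {".vbs", ".vbe", ".js", ".jse", ".wsf",
                         ".ps1", ".bat", ".cmd", ".exe", ".scr", ".lnk"}

def suspicious_members(zip_path: str) -> list[str]:
    """List archive members whose extension suggests an executable
    script, without extracting or running anything."""
    flagged = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if any(name.lower().endswith(ext)
                   for ext in SUSPICIOUS_EXTENSIONS):
                flagged.append(name)
    return flagged
```

If a chat attachment contains a `.vbs` or similar member, the safest response is to delete it and confirm with the sender over another channel.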

Additionally, users should regularly check active WhatsApp Web sessions and log out of any that are unrecognized. Avoid leaving WhatsApp Web signed in on shared or public computers, and enable two-factor authentication (2FA) within WhatsApp settings. Limiting web access can significantly reduce the potential spread of malware.

Keeping devices updated is also crucial. Installing Windows updates promptly and ensuring that web browsers are fully updated can close many vulnerabilities that attackers exploit. Strong antivirus software is essential for monitoring script abuse and PowerShell activity in real time, providing an additional layer of protection against malware.

Banking malware is often associated with identity theft and financial fraud. To minimize the fallout from such attacks, consider reducing your digital footprint. Data removal services can assist in removing personal information from data broker sites, making it harder for criminals to exploit your details if malware infiltrates your device. While no service can guarantee complete data removal from the internet, these services actively monitor and erase personal information from numerous websites, enhancing your privacy.

Even with robust security measures in place, financial monitoring adds another layer of protection. Identity theft protection services can track suspicious activity related to your credit and personal data, alerting you if your information is being sold on the dark web or used to open unauthorized accounts. Setting up alerts for bank and credit card transactions can help you respond quickly to any irregularities.

Most malware infections occur when users act too quickly. If a message feels suspicious, trust your instincts. Familiar names and friendly language can lower your guard, but they should never replace caution. Taking a moment to verify the authenticity of a message or file can prevent significant damage.

This WhatsApp Web malware campaign serves as a stark reminder that cyberattacks are increasingly sophisticated, often blending seamlessly into everyday conversations. The ease with which this threat can spread from one device to many is alarming. A single click can transform a trusted chat into a vehicle for banking malware and identity theft. Fortunately, simple changes in behavior, such as being vigilant about attachments, securing WhatsApp Web access, keeping devices updated, and exercising caution before clicking, can significantly reduce the risk of falling victim to such attacks.

As messaging platforms continue to play a larger role in our daily lives, maintaining awareness and adopting simple security habits is essential. Do you believe messaging apps are doing enough to protect users from malware that spreads through trusted conversations? Share your thoughts with us.


India’s Vision for AI Discussed at Washington Embassy Meeting

India’s Deputy Chief of Mission in Washington outlined the nation’s vision for artificial intelligence at a recent event, emphasizing the upcoming AI Impact Summit’s focus on practical outcomes for people, the planet, and progress.

WASHINGTON, DC — India is set to host the AI Impact Summit in New Delhi, which will revolve around three core themes: people, planet, and progress. The summit aims to transition global discussions on artificial intelligence from theoretical principles to actionable outcomes, according to Namgya Khampa, India’s Deputy Chief of Mission in Washington.

Khampa made these remarks during the “US-India Strategic Cooperation on AI” discussion, organized by the Observer Research Foundation America, the Special Competitive Studies Project, and the Embassy of India. The event, held at the US Capitol, convened policymakers and experts to outline shared priorities ahead of the summit.

She emphasized that artificial intelligence has evolved from a niche technology into a fundamental component that shapes economic competitiveness, geopolitical power, and societal outcomes.

India’s approach to AI is deeply rooted in its experience with digital public infrastructure. Khampa highlighted how inclusive, interoperable, and cost-effective technology has the potential to transform governance on a large scale. She pointed to platforms like Aadhaar and the Unified Payments Interface, which have significantly expanded access to public services, finance, and identity for over 1.4 billion Indians.

Khampa described AI as a “force multiplier” that enhances existing digital public infrastructure, making systems smarter, more responsive, productive, and accessible. This perspective aims to shift AI from being an abstract concept to a practical tool that drives transformation in everyday life.

The AI Impact Summit is notable for being the first major global AI summit hosted by a country from the Global South. Khampa stated that the summit seeks to address imbalances in global AI governance by promoting broader participation and ownership, rather than compromising on standards.

She elaborated on the summit’s framework, reiterating the themes of people, planet, and progress, which reflect India’s vision of “AI for all.” According to Khampa, AI should empower individuals rather than marginalize them, be resource-efficient, align with sustainability goals, and foster equitable economic growth, particularly in sectors like healthcare, education, agriculture, and public service delivery.

In light of increasing geopolitical tensions and the weaponization of technology supply chains, Khampa noted that technological resilience has become a central aspect of national strategy. She highlighted the India-US trust initiative as a means to transition cooperation from conceptual discussions to concrete projects across research, standards, skill development, and next-generation technologies.

India’s linguistic diversity and its population-scale digital platforms provide a unique environment for developing inclusive, multilingual AI systems. Meanwhile, the United States contributes cutting-edge research, capital, and advanced use cases that can be tested in India and scaled globally.

As the AI Impact Summit approaches, it is clear that India is positioning itself as a leader in the global dialogue on artificial intelligence, advocating for a vision that prioritizes inclusivity, sustainability, and practical benefits for all.

According to IANS, the summit is expected to set a precedent for future discussions on AI governance and cooperation.

OpenAI Introduces Advertising Features to ChatGPT Platform

OpenAI is set to introduce advertising in ChatGPT for U.S. users on its free and Go-tier plans, marking a significant shift in its revenue strategy.

OpenAI is preparing to test advertisements within ChatGPT, targeting users of its free version and the newly launched Go-tier plan in the United States. This initiative aims to alleviate the financial pressures associated with developing and maintaining advanced artificial intelligence systems.

The company announced on Friday that the ads will begin appearing in the coming weeks, clearly distinguished from the AI-generated responses that users receive. Users subscribed to OpenAI’s higher-tier plans—Plus, Pro, Business, and Enterprise—will not encounter these advertisements.

OpenAI emphasized that the introduction of ads will not affect the quality or integrity of ChatGPT’s responses. Furthermore, user conversations will remain confidential and will not be shared with advertisers.

This move represents a significant shift for OpenAI, which has primarily relied on subscription revenue up to this point. It also highlights the increasing financial challenges the company faces as it invests billions in data centers and prepares for a highly anticipated initial public offering.

Despite currently operating at a loss, OpenAI has projected that it will spend over $1 trillion on AI infrastructure by 2030. However, the company has yet to disclose a detailed plan for funding this extensive expansion.

Industry analysts suggest that advertising could become a vital new revenue stream for ChatGPT, which currently boasts approximately 800 million weekly active users. Nevertheless, they caution that this strategy carries inherent risks, including the potential to alienate users and diminish trust if the ads are perceived as intrusive or poorly integrated.

“If ads come off as clumsy or opportunistic, people won’t hesitate to jump ship,” warned Jeremy Goldman, an analyst at Emarketer. He noted that alternatives like Google’s Gemini or Anthropic’s Claude are readily available to users seeking ad-free experiences.

Goldman also indicated that OpenAI’s decision to incorporate ads could have broader implications for the industry, compelling competitors to clarify their own monetization strategies, particularly those that promote themselves as “ad-free by design.”

OpenAI has assured users that advertisements will not be displayed to individuals under the age of 18 and that sensitive topics, such as health and politics, will be excluded from advertising content.

According to the company, ads will be tested at the bottom of ChatGPT responses when relevant sponsored products or services align with the ongoing conversation. This approach aims to ensure that advertisements are contextually appropriate and minimally disruptive.

Advertisers are increasingly optimistic about AI’s potential to enhance results across search and social media platforms, believing that more sophisticated recommendation systems will lead to more effective and targeted advertising.

Additionally, OpenAI confirmed that its ChatGPT Go plan, initially launched in India, will soon be available in the U.S. at a monthly subscription price of $8.

This new advertising initiative marks a pivotal moment for OpenAI as it seeks to balance user experience with the need for sustainable revenue growth, navigating the challenges of an evolving digital landscape.

For more details, refer to American Bazaar.

Humans in the Loop: Tribal Wisdom and AI Bias Challenges

Independent film ‘Humans in the Loop’ explores the intersection of tribal wisdom and artificial intelligence, highlighting the importance of human input in technology.

Independent films often struggle to find their footing in the vast landscape of mainstream cinema. However, Humans in the Loop (2024), now streaming on Netflix, has carved out a niche for itself, thanks in part to the involvement of executive producer Kiran Rao. The film draws inspiration from a 2022 article by journalist Karishma Mehrotra in FiftyTwo, titled “Human Touch.” It follows the story of Nehma, an Adivasi woman from the Oraon tribe in Jharkhand, who returns to her ancestral village after a broken relationship and faces the challenge of supporting her children.

To make ends meet, Nehma takes a job as a data labeller at an AI data center, where she assigns labels to images and videos to help train AI systems. As she immerses herself in this work, she begins to recognize that the categories she is asked to define and the systems she is contributing to may harbor biases that are disconnected from her cultural understanding of nature, community, and labor.

One of the film’s emotional cores lies in the relationship between Nehma and her daughter, Dhaanu. While Dhaanu is drawn toward the urban world, Nehma feels a strong pull back to her land and traditions. Yet, she is also compelled to embrace this new mode of work. The film captures this dynamic beautifully, avoiding forced sentimentality.

Watching Humans in the Loop evokes a sense of quiet tension, navigating the complexities of place and displacement, tradition and technology, caregiving and coded labor. Viewers find themselves rooting for Nehma not only as a mother striving to support her children but also as a subtle force challenging conventional notions of progress.

The film employs contrasting spaces to enhance its narrative: the lush, vibrant village juxtaposed with the sterile, screen-filled environment of the data lab. These visual contrasts underscore the film’s exploration of loops—nature versus technology, labor versus identity, home versus exile. The sound design is particularly evocative, intertwining the natural sounds of the forest with the digital hum of the lab, creating a soulful auditory backdrop.

In addressing the theme of AI’s potential to enhance tribal lives, the film does not take an anti-AI stance. Instead, it posits that when AI systems integrate the labor, perspectives, and knowledge of tribal communities, they can become tools of recognition and empowerment. Nehma’s insistence on shaping the labels and incorporating her lived ecological knowledge into the system illustrates that technology can serve as a site of agency rather than mere extraction.

This hopeful loop suggests that humans can train machines, and in turn, the outputs of these machines can reflect that training. Nehma’s journey emphasizes that individuals can learn not only to survive but also to assert their knowledge. When approached ethically and collaboratively, AI can become part of a cycle of continuity, serving not as a break from tradition but as a tool to sustain and evolve it.

Titled after the human-in-the-loop (HITL) approach, which actively integrates human input and expertise into machine learning and AI systems, Humans in the Loop stands as a quietly significant film. Director Aranya Sahay has crafted a narrative that speaks to the age of AI while honoring the human experience—the laborer, the mother, the land. As discussions surrounding AI and equity continue to grow, this film is poised to resonate even more deeply over time, according to India Currents.
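The HITL approach the film takes its title from can be sketched schematically: a model labels the items it is confident about and routes uncertain cases to a human annotator, whose answers carry knowledge the model lacks. The functions below are illustrative stand-ins, not any real labeling pipeline; the confidence threshold and the toy labels are assumptions chosen for the example.

```python
def model_predict(item):
    """Stand-in classifier: returns a (label, confidence) pair."""
    return ("forest", 0.4) if "sal tree" in item else ("road", 0.9)

def human_label(item):
    """Stand-in for the human annotator's culturally informed label."""
    return "sacred grove"

def hitl_label(items, threshold=0.8):
    """Human-in-the-loop labeling: defer to a person when the model is unsure."""
    labels = []
    for item in items:
        label, conf = model_predict(item)
        if conf < threshold:      # low confidence: route this item to the human
            label = human_label(item)
        labels.append(label)
    return labels

print(hitl_label(["sal tree canopy", "highway at dusk"]))
# ['sacred grove', 'road']
```

In a real pipeline the human's corrections would also be fed back as training data, which is the "hopeful loop" the film gestures at: the machine's outputs come to reflect the labeler's knowledge.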

GTA 6 Online Mode Details Leaked in Court Documents

New details about GTA 6’s online mode have emerged from court documents, suggesting the game may feature 32-player lobbies ahead of its anticipated release on November 19, 2026.

New insights into the online mode of Grand Theft Auto VI (GTA 6) have surfaced from court documents related to a legal dispute involving Rockstar Games and its former employees. This information, which has not been officially confirmed by Rockstar, offers a glimpse into the multiplayer component of the highly anticipated game, set to be released on November 19, 2026, for PlayStation 5 and Xbox Series X/S.

Rockstar has maintained a tight lid on the details surrounding GTA 6’s multiplayer features. However, recent revelations from a tribunal in the UK indicate that the game may support up to 32 players in a single session, mirroring the current setup in GTA Online.

The details emerged during a legal hearing concerning the termination of over 30 developers at Rockstar, which is tied to allegations of leaking confidential information on a private Discord channel associated with the Independent Workers’ Union of Great Britain (IWGB). During the proceedings, Rockstar disclosed that certain internal messages discussed game features deemed “top secret.” Among these was a reference to a “large session” involving 32 players, which many have interpreted as a significant hint regarding the online mode.

According to the court documents, the leaked information stemmed from internal Discord messages where a former employee noted that Rockstar faced challenges in organizing playtests due to the need for 32-player sessions. Another developer questioned the difficulty of arranging such sessions, suggesting that multiple studios with quality assurance testers should be able to manage it.

While Rockstar has yet to officially confirm any multiplayer features for GTA 6, the leak aligns with the existing 32-player limit in GTA Online, providing one of the clearest indications of the online ambitions for the upcoming title.

Fans of the franchise have high expectations for GTA 6 Online, particularly given the success of GTA Online, which set a high standard for open-world multiplayer experiences. Many anticipate that the new installment will introduce innovative mechanics, expansive maps, fresh missions, and enhanced social features. For now, the only concrete detail is the reported 32-player limit for at least one type of online session.

In the midst of these developments, Rockstar has defended its decision to terminate the employees, asserting that the dismissals were due to the leaking of confidential information rather than any union-related activities. The company claims that sharing sensitive game details violated internal policies. Conversely, the IWGB and the dismissed developers contend that the firings were unjust and linked to union activism.

A recent ruling by a UK judge determined that Rockstar is not obligated to provide interim back pay to the terminated staff, which supports the studio’s position regarding confidentiality breaches.

The significance of the 32-player detail lies in its origin; it comes from official court documents rather than speculative leaks. While this number may seem modest compared to earlier rumors of larger player limits, it suggests that Rockstar may be adopting a familiar multiplayer structure as a foundation for GTA 6.

It remains uncertain whether the online mode will launch with additional player limits or game modes that could accommodate more than 32 players. Rockstar has not publicly commented on these possibilities. For now, this insight derived from court proceedings offers fans their first credible look at the multiplayer potential of GTA 6 as the release date approaches.

As anticipation builds, Rockstar has officially confirmed that GTA 6 will be available on November 19, 2026, for PS5 and Xbox Series X/S, with expectations for additional platform releases to follow. Fans are eagerly awaiting what is poised to be one of the most significant gaming releases in recent years, according to The Sunday Guardian.

Taiwan Plans $250 Billion Investment in U.S. Semiconductor Manufacturing

Taiwan has committed to investing $250 billion in U.S. semiconductor manufacturing, aiming to enhance domestic production capabilities and reduce reliance on foreign supply chains.

The U.S. Department of Commerce announced on Thursday that Taiwan will invest $250 billion to bolster semiconductor manufacturing in the United States. This significant deal, signed during the Trump administration, aims to enhance domestic production capabilities in a sector critical to both the economy and national security.

Under the agreement, Taiwanese semiconductor and technology companies will make direct investments in the U.S. semiconductor industry. These investments are expected to cover a range of areas, including semiconductors, energy, and artificial intelligence (AI) production and innovation. Currently, Taiwan is responsible for producing more than half of the world’s semiconductors, highlighting its pivotal role in the global supply chain.

In addition to the direct investments, Taiwan will provide $250 billion in credit guarantees to facilitate further investments from its semiconductor and tech enterprises. However, the timeline for these investments remains unspecified.

In exchange for Taiwan’s substantial investment, the United States plans to invest in various sectors within Taiwan, including semiconductor manufacturing, defense, AI, telecommunications, and biotechnology. The specific amount of this reciprocal investment has not been disclosed.

This announcement follows a proclamation from the Trump administration that reiterated the U.S. goal of increasing domestic semiconductor manufacturing. The proclamation emphasized that reliance on foreign supply chains poses significant economic and national security risks. “Given the foundational role that semiconductors play in the modern economy and national defense, a disruption of import-reliant supply chains could strain the United States’ industrial and military capabilities,” it stated.

Additionally, the proclamation introduced a 25% tariff on certain advanced AI chips and indicated that further tariffs on semiconductors would be considered once trade negotiations with other countries, including the deal with Taiwan, are finalized.

In 2025, semiconductor manufacturing has become a focal point of Trump’s economic agenda, with efforts aimed at reducing U.S. dependence on foreign chip production. The administration has proposed aggressive trade measures, including a potential 100% tariff on imported semiconductors, although companies that commit to establishing manufacturing facilities in the U.S. may be eligible for exemptions.

Last year, Taiwan Semiconductor Manufacturing Company (TSMC) announced plans to invest $100 billion to enhance chip manufacturing capabilities in the United States, further underscoring the importance of this sector.

Semiconductors are essential components of modern technology, powering a wide array of devices, from smartphones and automobiles to telecommunications equipment and military systems. The U.S. share of global wafer fabrication has significantly declined, dropping from 37% in 1990 to less than 10% in 2024. This shift has largely been attributed to foreign industrial policies that favor production in East Asia.

As the U.S. seeks to reclaim its position in the semiconductor industry, the partnership with Taiwan represents a critical step towards enhancing domestic manufacturing capabilities and securing supply chains.

This initiative reflects a broader strategy to strengthen the U.S. economy and safeguard national interests in an increasingly competitive global landscape, according to The American Bazaar.

RCB Introduces AI Solution for Crowd Management at Chinnaswamy Stadium

RCB is investing Rs 4.5 crore in an AI-enabled project to enhance crowd management and safety at M. Chinnaswamy Stadium during IPL 2026.

Royal Challengers Bangalore (RCB) is taking a significant step towards improving the matchday experience at M. Chinnaswamy Stadium by investing Rs 4.5 crore in an innovative project aimed at crowd management and safety.

In partnership with Staqu, a technology firm specializing in artificial intelligence, RCB plans to implement advanced facial recognition and intelligent monitoring systems. This initiative is designed to enhance public safety and ensure a seamless experience for fans attending matches.

The deployment of these technologies is expected to address crowd-related issues that have been a concern in previous seasons. By utilizing AI, RCB aims to streamline entry processes and monitor crowd behavior effectively, thereby reducing the likelihood of incidents and improving overall security.

As the Indian Premier League (IPL) continues to grow in popularity, the need for enhanced safety measures has become increasingly important. RCB’s proactive approach reflects a commitment to not only provide an enjoyable atmosphere for fans but also to prioritize their safety during events.

With the introduction of this AI-enabled solution, RCB hopes to set a new standard for crowd management in sports venues across India. The project signifies a forward-thinking approach to leveraging technology in enhancing the spectator experience.

According to NDTV, the collaboration with Staqu marks a significant investment in the future of sports management, showcasing RCB’s dedication to innovation and fan engagement.

Can Autonomous Trucks Enhance Highway Safety and Reduce Accidents?

Kodiak AI’s autonomous trucks have successfully driven over 3 million miles, demonstrating the potential for self-driving technology to enhance highway safety in real-world conditions.

Kodiak AI, a prominent player in the field of AI-powered autonomous driving technology, has been quietly proving the viability of self-driving trucks on actual highways. The company’s flagship system, known as the Kodiak Driver, integrates advanced software with modular, vehicle-agnostic hardware, creating a cohesive platform designed for the complexities of real-world trucking.

As Kodiak AI explains, the Kodiak Driver is not just a theoretical solution; it is built to address the challenges of highways, varying weather conditions, driver fatigue, and the demands of long-haul transportation. This practical approach is essential, as trucking is far from a controlled laboratory environment.

In a recent episode of CyberGuy’s “Beyond Connected” podcast, host Kurt Knutsson spoke with Daniel Goff, vice president of external affairs at Kodiak AI, about the evolving perceptions surrounding autonomous trucks. Goff reflected on the initial skepticism the company faced when it was founded in 2018. “When I first started at the company, I said I worked for a company that was working to build trucks that drive themselves, and people kind of looked at me like I was crazy,” he recalled. However, he noted a significant shift in public sentiment as autonomous vehicles have begun to demonstrate their capabilities beyond mere hype.

One of Kodiak AI’s key arguments is that machines can mitigate many risks associated with human driving. Goff emphasized, “This technology doesn’t get distracted. It doesn’t check its phone. It doesn’t have a bad day to take it out on the road. It doesn’t speed.” In the trucking industry, where safety is paramount, these “boring” characteristics of autonomous vehicles can be advantageous.

Kodiak AI has been actively operating freight routes for several years, rather than solely conducting tests in controlled environments. The company has a command center in Lancaster, Texas, which has facilitated deliveries to cities such as Houston, Oklahoma City, and Atlanta since 2019. During these operations, a safety driver is present to take control if necessary, allowing Kodiak to refine its technology in real-world conditions.

Long-haul trucking is crucial to the U.S. economy, yet it is also one of the most demanding and hazardous professions. Drivers often spend extended periods away from home, working long hours while managing heavy vehicles under various conditions. Goff pointed out that the job’s challenges are compounded by federal regulations that limit driving hours to reduce fatigue. “Driving a truck is one of the most difficult and dangerous jobs that people do in the United States every day,” he said. With a growing number of drivers retiring and fewer individuals entering the profession, the industry is experiencing a significant driver shortage.

Kodiak AI believes that autonomous technology is best suited for the most challenging and repetitive tasks within trucking. Goff explained, “The goal for this technology is really best suited for those really tough jobs—the long lonely highway miles, the trucking in remote locations where people either don’t want to live or can’t easily live.” He also noted that many trucks are idle for a significant portion of the day, with the average truck being driven only about seven hours daily. Autonomous technology could help optimize this by enabling trucks to operate around the clock, only stopping for refueling and safety inspections.

With over 3 million miles driven, Kodiak AI has established a strong safety record, with a safety driver present for most of those miles. Goff highlighted the scale of their operations by comparing it to the average American’s lifetime driving distance of approximately 800,000 miles. “We’re at almost four average lifetimes with our system today,” he stated. The company also utilizes computer simulations and various assessments to evaluate the safety of its system.

In addition to long-haul operations, Kodiak AI collaborates with Atlas Energy Solutions for oil logistics in the Permian Basin. As of the third quarter of 2025, the company has delivered ten driverless trucks to Atlas, which autonomously transport sand around the clock without a human operator in the cab. Goff described this partnership as an ideal environment for testing and refining their long-haul operations.

Kodiak AI has sought third-party validation of its safety claims, including a study with Nauto, a leader in AI-enabled dashcams. The results indicated that Kodiak’s system achieved the highest safety score recorded by Nauto.

Policy and regulation also play a critical role in the adoption of autonomous trucking. Goff noted that 25 states have enacted laws allowing for the deployment of autonomous vehicles. He believes that the inherent dangers of driving make a compelling case for the technology. “People who think about transportation every day understand how dangerous driving a car is, driving a truck is, and just being on the road see the potential for this technology,” he said.

Despite the advancements, concerns about safety remain prevalent among advocates and everyday drivers. Critics question whether autonomous systems can respond adequately in emergencies or handle unpredictable human behavior on the road. Goff acknowledged these concerns, stating, “In this industry in particular, we really understand how important it is to be safe.” He emphasized that trust in autonomous systems must be earned through consistent real-world performance and transparent testing.

For everyday drivers, the prospect of sharing the road with autonomous vehicles can be unsettling, especially given the focus on potential failures in media coverage. However, Kodiak AI argues that the removal of human factors such as fatigue and distraction could lead to safer highways. If the technology continues to perform as claimed, it could result in fewer tired drivers on overnight routes, more reliable freight movement, and ultimately safer roads for all users.

As Kodiak AI continues to move freight and gather safety data on public roads, skepticism remains a vital aspect of the conversation surrounding autonomous trucking. The future of this technology will depend on its ability to demonstrate long-term safety benefits and earn the trust of the public, rather than relying on promises alone. The pressing question is no longer whether self-driving trucks can operate effectively, but whether they can consistently prove to enhance safety for everyone on the road.

For further insights, refer to CyberGuy.

Google Launches Program to Support Indian AI Startups Going Global

Google has launched a new Market Access Program aimed at helping Indian AI startups scale globally, coinciding with the projected growth of India’s AI market to $126 billion by 2030.

With India’s artificial intelligence (AI) market projected to reach $126 billion by 2030, Google has introduced a new Market Access Program designed to assist Indian AI startups in scaling their operations and expanding into global markets.

Announced during the Google AI Startups Conclave in New Delhi, the program aims to support startups from their initial seed stage to full-scale operations. Preeti Lobana, Vice President and Country Manager for Google India, emphasized the importance of this initiative, stating, “If you solve for India, you build for the world. Our focus now is accelerating how quickly Indian startups can scale, reach global markets, and deliver outcomes.”

Lobana noted that India’s AI startup ecosystem is entering a transformative phase, moving from prototypes to market-ready products and transitioning from early traction to sustainable business models. Google’s comprehensive support for startups encompasses capability building, real-world deployment, and scaling, addressing challenges at every critical stage of development.

The Market Access Program is specifically tailored for AI-first startups that are prepared to scale responsibly. It focuses on three key outcomes: enhancing enterprise readiness through global selling expertise, providing access to Google’s extensive enterprise network, and facilitating global immersion in key international markets.

To bolster the capabilities of these startups, Google also announced the upcoming Global AI Hub in Visakhapatnam. This facility, which will be powered by green energy and feature 1-gigawatt computing resources, is designed to equip startups with the high-performance computing necessary to refine their AI models on a global scale.

In addition to the Market Access Program, Google unveiled new updates to its Gemma model family, specifically targeting areas of rapid adoption in India, such as population-scale healthcare AI and action-oriented, on-device agents. The latest iteration, MedGemma 1.5, enhances Google’s health-focused AI initiatives by enabling developers to create applications that support complex medical imaging workflows.

The release of MedGemma 1.5 follows a collaboration between Google and the All India Institute of Medical Sciences (AIIMS), which is utilizing the model to develop India’s Health Foundation Models. This partnership contributes to the country’s Digital Public Infrastructure and enhances health outcomes across the ecosystem.

To support the growing demand for agent-based systems, Google introduced FunctionGemma, a specialized version of the Gemma 3 model. FunctionGemma is designed for function calling, allowing startups to translate natural language commands into executable actions. This capability enables the development of on-device, low-latency applications with automated workflows that prioritize user privacy and can function effectively on low-end devices without a constant internet connection.
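Google has not published FunctionGemma's interface in this article, but the function-calling pattern it describes can be sketched generically: the model translates a free-form command into a structured call, which the app then executes against a registry of permitted functions. Everything below, including `fake_model`, the JSON call format, and the example functions, is an illustrative stand-in, not the real FunctionGemma API.

```python
import json

# Registry of app-side actions the model is allowed to invoke.
FUNCTIONS = {
    "set_alarm": lambda hour, minute: f"Alarm set for {hour:02d}:{minute:02d}",
    "send_message": lambda to, body: f"Sent to {to}: {body}",
}

def fake_model(prompt: str) -> str:
    """Stand-in for a function-calling model: maps text to a JSON 'call' object."""
    if "alarm" in prompt:
        return json.dumps({"name": "set_alarm", "args": {"hour": 7, "minute": 30}})
    return json.dumps({"name": "send_message", "args": {"to": "Asha", "body": prompt}})

def dispatch(prompt: str) -> str:
    """Translate a natural-language command into an executable action."""
    call = json.loads(fake_model(prompt))
    fn = FUNCTIONS[call["name"]]   # look up the requested function by name
    return fn(**call["args"])      # execute it with the model-chosen arguments

print(dispatch("wake me at 7:30 with an alarm"))
# Alarm set for 07:30
```

Because the model emits only a structured call rather than arbitrary code, the app keeps full control over which actions can run, which is what makes this pattern suitable for the on-device, privacy-sensitive use the article describes.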

Together, these advancements expand the toolkit available to Indian founders, facilitating the transition from experimentation to deployment across healthcare, enterprise, and consumer applications at scale. Lobana highlighted that these models are supported by popular tools throughout the development workflow, including Hugging Face Transformers, Unsloth, Keras, and NVIDIA NeMo.

Alongside the Conclave, Inc42 released the “Bharat AI Startups Report 2026,” which was supported by Google. The report reveals a significant shift in the AI ecosystem, with 47% of enterprises already moving from pilot projects to full production. It also notes a decrease in innovation costs, as historically high computing expenses have hindered Indian startups. With public resources lowering entry barriers, funding is increasingly directed toward product innovation rather than infrastructure costs.

India’s unique challenges, including its 22 languages, inconsistent connectivity, and price sensitivity, have often been viewed as obstacles. However, the report reframes these challenges as assets, suggesting that if an AI solution can effectively serve rural users in India, it is robust enough for global markets. The concept of “Bharat-tested” technology is emerging as a new benchmark for resilience.

The competitive landscape is shifting towards trust-by-design, with startups that prioritize safety, privacy, and security from the outset gaining a significant advantage in securing long-term enterprise contracts.

Ultimately, the success of AI initiatives will be measured by their outcomes. Examples include Cloudphysician, which has reduced ICU mortality rates by 40%, and Rocket Learning, which personalizes education for millions of students. Lobana concluded, “By stitching together skilling, capital, infrastructure, and market access, we are clearing the path for founders. As we look to the AI Impact Summit in February, the signal is clear: The future of AI isn’t just being used in India; it is being built here.”

According to Inc42, the launch of the Market Access Program marks a pivotal moment for Indian AI startups, positioning them to thrive in a rapidly evolving global landscape.

NASA’s Artemis II Mission Marks First Crewed Deep Space Flight in Over 50 Years

NASA is set to launch Artemis II on February 6, marking the return of humans to deep space for the first time in over 50 years with a historic 10-day mission around the Moon.

NASA has announced that it will return humans to deep space next month, targeting a launch date of February 6 for Artemis II. This 10-day crewed mission will carry astronauts around the Moon for the first time in more than half a century.

“We are going — again,” NASA stated in a post on X, confirming that the mission is scheduled to depart no earlier than February 6. The first available launch window will run from January 31 to February 14, with specific launch opportunities on February 6, 7, 8, 10, and 11.

If the launch is delayed, additional windows will open from February 28 to March 13, and from March 27 to April 10. The first of these backup windows offers launch opportunities on March 6, 7, 8, 9, and 11, while the second offers chances on April 1, 3, 4, 5, and 6.

The mission is set to lift off from Launch Complex 39B at NASA’s Kennedy Space Center in Florida, aboard the Space Launch System (SLS), the most powerful rocket the agency has ever constructed. Preparations are already underway to move the rocket to the launch pad, with the rollout expected to begin no earlier than January 17. This process involves a four-mile journey from the Vehicle Assembly Building to Launch Pad 39B aboard the crawler-transporter 2, which is anticipated to take up to 12 hours.

“We are moving closer to Artemis II, with rollout just around the corner,” said Lori Glaze, acting associate administrator for NASA’s Exploration Systems Development Mission Directorate. “We have important steps remaining on our path to launch, and crew safety will remain our top priority at every turn as we near humanity’s return to the Moon.”

The 322-foot rocket will carry four astronauts beyond Earth’s orbit to test the Orion spacecraft in deep space for the first time with a crew on board. This mission represents a significant milestone following the Apollo era, which last sent humans to the Moon in 1972.

The Artemis II crew includes NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with Canadian Space Agency astronaut Jeremy Hansen. This mission will be notable for being the first lunar mission to include a Canadian astronaut and the first to carry a woman beyond low Earth orbit.

After launch, the astronauts are expected to spend approximately two days near Earth to check Orion’s systems before igniting the spacecraft’s European-built service module to begin their journey toward the Moon.

This maneuver will send the spacecraft on a four-day trip around the far side of the Moon, tracing a figure-eight path that will take the crew more than 230,000 miles from Earth and thousands of miles beyond the lunar surface at its farthest point.

Rather than firing engines to return home, Orion will utilize a fuel-efficient free-return trajectory that leverages the gravitational forces of both Earth and the Moon to guide the spacecraft back to Earth during the roughly four-day return trip.

The mission will conclude with a high-speed reentry and splashdown in the Pacific Ocean off the coast of San Diego, where recovery teams from NASA and the Department of Defense will be on hand to retrieve the crew.

Artemis II follows the uncrewed Artemis I mission and is a crucial test of NASA’s deep-space systems before astronauts attempt a lunar landing on a future flight. NASA emphasizes that this mission is a key step toward long-term lunar exploration and eventual crewed missions to Mars, according to Fox News.
