Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a novel electronic tattoo, or “e-tattoo,” designed to monitor mental workload in high-stress professions by measuring brain activity and cognitive performance.

In a study published in the journal Device, scientists introduce a forehead-worn e-tattoo that lets individuals in high-pressure work environments track their brainwaves and cognitive performance.

Dr. Nanshu Lu, the senior author of the research from the University of Texas at Austin, emphasized the significance of mental workload in human-in-the-loop systems, noting its direct influence on cognitive performance and decision-making. The e-tattoo is particularly aimed at professionals in demanding roles such as pilots, air traffic controllers, doctors, and emergency dispatchers.

According to Dr. Lu, the technology could also benefit emergency room doctors and operators of robots and drones, enhancing their training and performance. One of the primary objectives of the study was to develop a method for measuring cognitive fatigue in high-stakes careers.

The e-tattoo is designed to be temporarily affixed to the forehead and is notably smaller than existing devices. It uses electroencephalography (EEG) and electrooculography (EOG) to measure brain waves and eye movements, offering a compact, cost-effective alternative to traditional EEG and EOG machines, which tend to be bulky and expensive.

Dr. Lu explained that the e-tattoo is “as thin and conformable to the skin as a temporary tattoo sticker,” making it a practical solution for real-time monitoring of mental workload. She highlighted that understanding human mental workload is essential in the fields of human-machine interaction and ergonomics due to its impact on cognitive performance.

The study involved six participants who were tasked with identifying letters displayed on a screen. The letters flashed one at a time in various locations, and participants were instructed to click a mouse if either the letter or its location matched a previously shown letter. Each participant completed the task multiple times, with varying levels of difficulty.
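The matching task described above resembles a dual n-back test. The following is a minimal sketch of the scoring logic; the function names and the match distance `n` are illustrative assumptions, not details taken from the published study.

```python
# Minimal sketch of the letter/location matching logic described above.
# The match distance n is an illustrative assumption, not a study parameter.
def is_match(history, current, n=1):
    """Compare the current (letter, location) stimulus to the one shown
    n steps earlier; return (letter_match, location_match)."""
    if len(history) < n:
        return (False, False)
    prev_letter, prev_loc = history[-n]
    cur_letter, cur_loc = current
    return (cur_letter == prev_letter, cur_loc == prev_loc)

def score_trial(stimuli, responses, n=1):
    """Count correct responses: a click is correct exactly when either the
    letter or the location matches the stimulus n steps back."""
    correct, history = 0, []
    for stim, clicked in zip(stimuli, responses):
        letter_hit, loc_hit = is_match(history, stim, n)
        if clicked == (letter_hit or loc_hit):
            correct += 1
        history.append(stim)
    return correct
```

Raising `n` (matching against stimuli further back) is the standard way such tasks increase difficulty, which is consistent with the varying difficulty levels the study reports.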

The researchers observed that as the difficulty of the tasks increased, the brainwave activity detected by the e-tattoo shifted, indicating a corresponding rise in mental workload. The device comprises a battery pack, reusable chips, and a disposable sensor, making it both practical and efficient.
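EEG workload research commonly quantifies such shifts as changes in band power, for example frontal theta rising and alpha falling under load. Below is a stdlib-only sketch of a band-power computation via a naive discrete Fourier transform, run on a synthetic signal; this is an illustrative analysis, not the study's actual pipeline.

```python
import cmath, math

def band_power(signal, fs, lo, hi):
    """Sum squared DFT magnitudes over bins whose frequency lies in [lo, hi).
    Naive O(n^2) DFT; fine for short illustrative windows."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            total += abs(coeff) ** 2
    return total

# Synthetic 1-second "EEG" window: a pure 10 Hz (alpha-band) sine at 128 Hz.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)  # dominated by the 10 Hz component
theta = band_power(sig, fs, 4, 8)   # near zero for this synthetic signal
```

A workload index such as a theta/alpha ratio can then be tracked across task difficulty levels.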

Currently, the e-tattoo exists as a lab prototype, with a production cost of approximately $200. Dr. Lu noted that further development is necessary before it can be commercialized, including real-time mental workload decoding and validation in more realistic environments.

This innovative technology holds promise for enhancing performance and well-being in high-stress jobs, providing a new tool for monitoring cognitive load and potentially improving decision-making processes in critical situations.

For more information, refer to the study published in Device.

Hims & Hers Reports Breach of Customer Support System

Hims & Hers, a telehealth company, reported a data breach involving its customer support system, with hackers accessing personal information between February 4 and February 7, 2026.

Hims & Hers, a telehealth company specializing in weight loss medications and sexual health prescriptions, has confirmed a data breach affecting its third-party customer service platform. The company disclosed the incident in a notice filed with the California attorney general’s office on Thursday.

According to Hims & Hers, hackers infiltrated its third-party ticketing system between February 4 and February 7, stealing a significant number of support tickets that contained personal information submitted by customers. The breach notice indicated that the stolen data included customer names, contact information, and other unspecified personal details, which the company chose to redact in its communication.

While Hims & Hers assured customers that their medical records were not compromised, the nature of the customer support system means that the data could still contain sensitive information regarding individuals’ accounts and healthcare. The company has not disclosed the number of individuals affected by the breach. Under California law, companies must report data breaches that impact 500 or more residents of the state.

“Customer medical records were not impacted by this incident, and neither were communications with healthcare providers on the platform,” the company stated. Hims & Hers is currently reviewing its policies and procedures to prevent similar intrusions in the future and has notified federal law enforcement. The company will also inform regulators if required.

Jake Martin, a spokesperson for Hims & Hers, explained to TechCrunch that the breach was the result of a social engineering attack, in which hackers deceived employees into granting access to their systems. He noted that the stolen data “primarily included customer names and email addresses.” However, the company would not specify what other types of data were taken when questioned by TechCrunch.

Additionally, Hims & Hers did not indicate whether it received any communication from the hackers, such as ransom demands. As of now, no hacking group has claimed responsibility for the attack, and the stolen data has not appeared publicly. Information generated by healthcare organizations is often highly sought after by criminals due to its potential for misuse in phishing and identity theft schemes.

In recent years, customer support and ticketing systems have become increasingly attractive targets for hackers. Financially motivated cybercriminals have been known to raid databases containing customer information and extort companies for ransom. For instance, last year, Discord experienced a data breach affecting its customer support ticketing system, which exposed government-issued IDs of approximately 70,000 individuals who had submitted their driver’s licenses and passports for age verification.

This incident underscores the growing risks associated with data security in the telehealth sector and highlights the importance of robust cybersecurity measures to protect sensitive customer information.

For more details, refer to TechCrunch.

Responsible AI Is Essential for Building Trust in a Fragmented World

Artur Turemka discusses the critical role of responsible AI in fostering trust and navigating regulatory challenges in the global fintech landscape during a recent podcast episode.

As artificial intelligence continues to transform payments, commerce, and global expansion, a pressing question emerges: how can businesses build a truly global platform amidst a landscape of local regulations? This topic was explored in depth on the “CAIO Connect” podcast, hosted by Sanjay Puri, featuring Artur Turemka, Chief Global Growth Officer at Autopay. The episode, recorded during the World Economic Forum in Davos, provides valuable insights into the intersection of AI, fintech, and regulatory frameworks in today’s digital economy.

Turemka operates at the forefront of fintech innovation and international growth. In his role at Autopay, he is tasked with expanding the company’s reach beyond Poland and Europe while maintaining the trust that is essential to financial services. Autopay specializes in facilitating seamless payments for merchants, ensuring that transactions are executed quickly, securely, and without interruption.

A pivotal moment in the podcast is Turemka’s introduction of the “Zero Delay Economy” concept. This initiative goes beyond merely expediting payments; it aims to provide merchants with greater freedom, independence, and time. Turemka emphasizes that when payment processes function smoothly, businesses can concentrate on what truly matters: fostering growth and enhancing customer relationships.

When Puri inquires about the role of AI at Autopay, Turemka makes it clear that AI is integrated throughout the organization. From fraud detection and transaction acceleration to enhancing internal productivity, AI plays a crucial role in driving efficiency at every level. In the realm of payments, AI bolsters trust by identifying anomalies and preventing fraudulent activities in real time. Additionally, it empowers employees by streamlining daily tasks and facilitating quicker decision-making.

The key takeaway from Turemka’s insights is straightforward yet impactful: AI should be utilized to enhance outcomes for both customers and teams, rather than being deployed merely for the sake of novelty.

Operating within the financial services sector entails navigating a landscape of stringent regulatory oversight. Turemka underscores the importance of compliance and data protection, stating that these priorities are paramount. Whether adhering to Polish regulations, European laws such as GDPR, or other jurisdiction-specific guidelines, Autopay is committed to ensuring that customer data is handled responsibly and ethically.

Given that AI systems often depend on extensive amounts of sensitive data, Turemka highlights a crucial leadership lesson: responsible AI is not optional in fintech; it is essential for establishing long-term trust.

One of the more candid moments in the podcast revolves around the challenges of regulation. While the aspiration is to create global platforms, Turemka acknowledges that unified global regulations are currently unrealistic. Instead, Autopay adopts a market-by-market approach, investing in compliance and drawing lessons from best practices across different regions.

Turemka notes that this strategy is not without its difficulties, but it is necessary for achieving global growth. Flexibility, patience, and a readiness to operate within diverse regulatory frameworks while upholding a consistent value proposition are critical components of success.

As a co-host of the Leaders Forum Poland, Turemka also shares insights into Poland’s emerging role on the global innovation stage. He advocates for viewing AI not through a national lens, but as part of a global ecosystem driven by talent, ambition, and collaboration. Poland’s increasing entrepreneurial success and economic momentum reflect this broader perspective.

In conclusion, Turemka leaves listeners with a powerful message: progress is rooted in dialogue and partnership. In times of complexity, breaking down barriers, collaborating across sectors, and remaining open to conversation are vital for driving meaningful innovation.

As the episode draws to a close, one theme resonates strongly: scaling AI and fintech on a global scale is not merely a technical challenge; it is fundamentally a human one. Ultimately, trust—more than technology—remains the most valuable currency in this evolving landscape.

For more information, refer to The American Bazaar.

Artemis II Performs Key Lunar Burn for Historic Deep-Space Mission

The Artemis II mission has successfully transitioned to a lunar trajectory, marking a significant milestone in human space exploration with its four-member crew set for a historic journey.

The four-member crew of NASA’s Artemis II mission has successfully transitioned from Earth’s orbit to a lunar trajectory following a flawless translunar injection (TLI) burn. This maneuver, executed late Thursday, officially commits the Orion spacecraft to a high-stakes, eight-day journey that will carry humans to the vicinity of the moon for the first time since 1972. As the first crewed flight of the Space Launch System (SLS) rocket and the Orion capsule, Artemis II serves as a pivotal stress test for deep-space life-support systems and navigation. By the end of this mission, the crew is expected to set a new record for the farthest distance humans have ever traveled from Earth, surpassing the benchmark set by the Apollo 13 mission over five decades ago.

CAPE CANAVERAL, Fla. — NASA’s Artemis II mission entered its most ambitious phase on Thursday evening as the Orion spacecraft’s main engine fired for nearly six minutes, accelerating the vehicle to escape velocity and setting a course for the moon. The maneuver, known as the translunar injection (TLI) burn, took place approximately 25 hours after the mission’s historic liftoff from Kennedy Space Center’s Launch Complex 39B.
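For context, the scale of a translunar injection burn can be estimated from the vis-viva equation: the delta-v needed to stretch a circular low-Earth parking orbit into a transfer ellipse reaching lunar distance works out to roughly 3.1 km/s. The sketch below is a back-of-the-envelope calculation using textbook constants and an assumed 200 km parking orbit, not NASA's actual burn figures.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3     # mean Earth radius, m
R_MOON = 384_400e3    # mean Earth-moon distance, m (transfer apogee)

def tli_delta_v(parking_alt_m=200e3):
    """Vis-viva estimate of the burn from a circular parking orbit onto a
    transfer ellipse whose apogee reaches lunar distance."""
    r = R_EARTH + parking_alt_m
    a = (r + R_MOON) / 2                      # semi-major axis of transfer
    v_circ = math.sqrt(MU / r)                # circular parking-orbit speed
    v_peri = math.sqrt(MU * (2 / r - 1 / a))  # speed at transfer perigee
    return v_peri - v_circ                    # required delta-v, m/s
```

A roughly six-minute burn delivering on the order of 3 km/s is consistent with the duration reported for the maneuver.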

With the successful completion of the burn, the crew—Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch, and Canadian Space Agency (CSA) Mission Specialist Jeremy Hansen—is now on a “free-return” trajectory. This orbital path ensures that the moon’s gravity will naturally pull the spacecraft around its far side and sling it back toward Earth for a Pacific Ocean splashdown, currently scheduled for April 10, 2026.

The Artemis II mission is designed to push the boundaries of human reach. While the Apollo missions of the late 1960s and early 1970s focused on lunar landings, Artemis II is a “shakedown” flight intended to validate the Orion spacecraft’s performance with a human crew. On the sixth day of the mission, the crew is projected to reach a point roughly 4,600 miles beyond the far side of the moon.

At its maximum distance, Orion will be more than 230,000 miles from Earth, on a trajectory NASA expects to eclipse the standing record of 248,655 miles (400,171 kilometers) set by the crew of Apollo 13 in 1970, who were forced into a high-altitude lunar loop following an onboard explosion. Unlike the emergency circumstances of that 1970 record, the Artemis II trajectory is a deliberate test of the SLS rocket’s precision and Orion’s ability to sustain life in the harsh radiation environment of deep space.

“Humanity has once again shown what we are capable of, and it’s your hopes for the future that carry us now on this journey around the moon,” Jeremy Hansen said in his first address to Mission Control following the TLI burn. Hansen’s inclusion marks the first time a non-American has traveled beyond low-Earth orbit, a nod to the international coalition-building that defines the Artemis program.

The TLI burn utilized an Orbital Maneuvering System (OMS) engine with a storied pedigree. The engine used for this mission was salvaged and refurbished from the Space Shuttle program, having previously flown on 19 different shuttle missions. This hardware evolution underscores NASA’s strategy of blending legacy technology with modern computing power.

The Orion capsule itself offers a stark contrast to the Apollo-era Command Modules. While the Apollo capsules provided 210 cubic feet of habitable volume for three astronauts, Orion provides 331 cubic feet—an increase of more than 50%—to accommodate its four-member crew. This extra space is critical for the mission’s various objectives, which include testing a $23 million waste management system and exercise equipment designed to prevent bone density loss during longer voyages to Mars.

“With this burn to the moon, we do not leave Earth. We choose it,” Mission Specialist Christina Koch noted before the burn, emphasizing the mission’s role in gathering data to protect the home planet and its future explorers. Koch, who already holds the record for the longest single spaceflight by a woman, is now poised to become the first woman to reach the lunar vicinity.

The Artemis program represents a significant shift in U.S. space policy, moving away from the “flags and footprints” approach of the mid-20th century toward a sustainable lunar economy. This mission is the second in a series of planned flights, following the uncrewed Artemis I in 2022. It sets the stage for Artemis III and IV, which aim to land the first woman and person of color on the lunar surface later this decade.

However, the program faces intense scrutiny regarding its fiscal and temporal milestones. Originally slated for an earlier launch, Artemis II was delayed due to technical refinements and budget reallocations. The SLS rocket, standing 322 feet tall, carries a per-launch price tag estimated at $2.2 billion, part of a broader program that has seen costs climb into the tens of billions.

The geopolitical stakes are equally high. The United States is currently in a de facto space race with China, which has announced plans to land taikonauts on the moon by 2030. The Artemis Accords, a set of non-binding principles for space cooperation, now boast over 40 signatories, positioning Artemis II as a diplomatic tool as much as a scientific one.

As the crew settles into the “coast” phase of the mission, their daily schedule is packed with system checks. They have already addressed minor issues typical of a test flight, including a brief glitch in the communication system and a small leak in the waste management suction line, both of which were resolved by Mission Control in Houston.

Over the next 48 hours, the crew will focus on optical navigation, radiation monitoring, and CO2 scrubbing to ensure the life-support system effectively filters the air for four active adults over a prolonged period.

As Orion moves further away, the Earth will appear as a shrinking marble in the spacecraft’s windows. For Commander Reid Wiseman and his crew, the next eight days are not just a journey through the vacuum of space, but a bridge between the legacy of the 20th century and the aspirations of the 21st, according to NASA.

Banking Technology Data Breach Affects 672,000 Customers in Ransomware Attack

A ransomware attack on Marquis, a fintech company, has exposed sensitive personal and financial data of over 672,000 individuals, raising concerns about data security in the banking sector.

A recent ransomware attack on Marquis, a Texas-based fintech company, has compromised the personal and financial data of 672,075 individuals. This breach has raised alarms about the security of sensitive information held by third-party companies that support banking institutions.

Marquis, which provides data analytics tools to numerous banks, reported that hackers gained access to its systems in August 2025. The stolen data includes critical information such as names, dates of birth, home addresses, bank account details, debit and credit card numbers, and Social Security numbers. Such a combination of data can facilitate serious identity theft and fraud.

What makes this incident particularly concerning is that Marquis is not a household name, meaning many individuals may not have been aware that their data was stored with the company. The breach highlights the vulnerabilities that can exist within the banking ecosystem, especially when third-party vendors are involved.

In the wake of the attack, Marquis has filed a lawsuit against its firewall provider, SonicWall, alleging that a security flaw may have allowed the attackers to access critical configuration files. According to the lawsuit, these files provided hackers with a detailed map of Marquis’ network, which they exploited to steal data and deploy ransomware.

The lawsuit accuses SonicWall of failing to secure its cloud backup system, which allegedly exposed firewall configuration files, encrypted credentials, and detailed network architecture related to customer environments. Marquis claims that this level of access effectively gave the attackers a blueprint of its defenses. Furthermore, the complaint alleges that SonicWall was aware of the compromise to its cloud backup service but did not promptly disclose the full extent of the breach, initially reassuring customers that firewall protections were intact. This delay hindered Marquis’ ability to take timely protective measures.

In a statement, a spokesperson for Marquis detailed the company’s response to the incident. “In August 2025, Marquis Marketing Services identified a data security incident and immediately enacted our incident response protocols, including proactively taking affected systems offline to protect our data and our customers’ information,” the spokesperson said. “We engaged leading third-party cybersecurity experts to conduct a comprehensive investigation and notified law enforcement.” The spokesperson also noted that SonicWall later clarified that firewall configuration data and credentials associated with all customers using the cloud backup service had been accessed.

Experts warn that the exposure of firewall configuration files can significantly increase the risk of further attacks. These files serve as blueprints that can reveal vulnerabilities within a company’s defenses, allowing attackers to bypass security measures that would typically prevent unauthorized access.

Once inside the network, hackers can copy sensitive data and encrypt systems to demand a ransom. Even if the company manages to restore operations, the stolen data remains a significant threat, as criminals can use it to open credit cards, take out loans, or access bank accounts. Additionally, they can combine this data with other leaks to create convincing scams that may target victims through phone calls, emails, or messages that appear to be from legitimate sources.

Individuals concerned about their data being exposed in this breach are encouraged to take proactive measures to protect themselves against identity theft and fraud. One recommended step is to check if their email addresses have been compromised by visiting the website Have I Been Pwned. This resource allows users to see if their information appears in the recent data leak.
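For passwords specifically, Have I Been Pwned also offers a keyless “Pwned Passwords” range API built on k-anonymity: the client hashes the password with SHA-1 and sends only the first five hex characters, so the full hash never leaves the machine. The sketch below shows just the client-side hashing step; the range endpoint named in the comment is the service’s documented public one.

```python
import hashlib

def hibp_range_parts(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix that is
    sent to https://api.pwnedpasswords.com/range/<prefix> and the 35-char
    suffix that is matched locally against the returned candidate list."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

Because only the prefix is transmitted, the service learns nothing about which specific password was checked.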

It is also advisable to secure important accounts, such as email and banking, by using strong, unique passwords that include a mix of letters, numbers, and symbols. Avoiding predictable choices, such as names or birthdays, and never reusing passwords can further enhance security. Utilizing a password manager can simplify the process of managing complex passwords and help identify any breaches.

Regularly monitoring financial transactions is crucial. Checking accounts frequently can help detect unauthorized charges early, as criminals often test accounts with small transactions before attempting larger withdrawals. If there is a possibility that a Social Security number has been exposed, placing a fraud alert or freezing credit can provide additional protection against identity theft.

Enabling two-factor authentication (2FA) for banking and email accounts adds an extra layer of security, making it more difficult for unauthorized individuals to access accounts even if they have the password. Keeping devices and applications updated with the latest security patches and installing trusted antivirus software can also help mitigate risks associated with malware and phishing scams.
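App-based 2FA codes are typically time-based one-time passwords (TOTP, RFC 6238), which derive a six-digit code from a shared secret and the current 30-second interval. A stdlib-only sketch of the underlying algorithm:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic
    truncation down to the last `digits` decimal digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: the HOTP counter is the number of periods since the epoch."""
    return hotp(secret, int(time.time()) // period, digits)
```

Because the code depends on a secret the attacker does not hold, a stolen password alone is not enough to log in.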

This breach underscores a growing concern regarding the security of personal data held by third-party companies. As financial data is often shared across a network of vendors, the consequences of a security failure can extend beyond the initial company involved. The ongoing legal battle between Marquis and SonicWall raises important questions about accountability in the cybersecurity landscape, particularly when breaches expose sensitive information of hundreds of thousands of individuals.

As the situation develops, it remains critical for consumers to stay informed and take necessary precautions to protect their personal information. For more information on identity theft protection and data security, resources are available at CyberGuy.com, which offers insights and tools to help individuals safeguard their digital identities.

For further details on this incident, refer to Fox News.

CloudFront Service Disruption Affects Users Globally

The disruption of Amazon’s CloudFront service on October 11, 2023, highlighted vulnerabilities in digital infrastructure, affecting user access to numerous online platforms worldwide.

On October 11, 2023, a significant service disruption impacted users attempting to access various online platforms reliant on Amazon’s CloudFront, a widely utilized content delivery network (CDN). The incident resulted in a 403 error, which indicated that user requests could not be fulfilled, effectively blocking access to essential digital services. This event raises critical questions about the reliability of cloud-based infrastructures, particularly as digital operations become increasingly central to business functionality.

CloudFront, part of Amazon Web Services (AWS), is designed to optimize the delivery of data, applications, and APIs globally by reducing latency and enhancing transfer speeds. On this day, however, the service suffered widespread access failures, with early speculation pointing to an unexpected surge in traffic. AWS reports that CloudFront supports millions of websites worldwide, underscoring the importance of its operational stability for businesses that depend on uninterrupted internet access.

The 403 error encountered by many users signifies that access to a resource is forbidden: CloudFront received the request but refused to serve it. Such errors can arise from various factors, including distribution misconfigurations, overly restrictive firewall or permission rules, or problems with the origin server that CloudFront was trying to reach. The absence of an immediate explanation from AWS regarding the specific cause of the disruption led to speculation about the incident’s nature and its implications for users and businesses alike.

While the precise extent of the outage remains unclear, its potential impact is significant. Businesses utilizing CloudFront for service delivery could experience revenue losses, increased customer dissatisfaction, and reputational damage. Affected sectors included e-commerce, news media, and entertainment, where timely access to services is crucial. This incident serves as a stark reminder of the fragility inherent in cloud infrastructures, especially as reliance on such services continues to grow.

Historically, there have been several notable instances of severe outages in cloud services that resulted in widespread disruptions. For example, a June 2021 outage at the CDN provider Fastly briefly knocked out major platforms including Reddit and Twitch. Such incidents have sparked discussions about the vulnerabilities associated with a concentrated reliance on a limited number of cloud service providers. Critics argue that these outages highlight the risks of single points of failure within the digital economy, emphasizing the need for more resilient infrastructure and diversified service strategies.

As users attempted to troubleshoot the access issues, reports indicated that the CloudFront error was not isolated to any single website or service. Instead, failures were reported across a broad spectrum of platforms, suggesting a systemic problem rather than isolated incidents. In response to the disruption, CloudFront’s official documentation advised users experiencing similar issues to check their configurations and optimize server settings for high traffic scenarios. This guidance aims to help mitigate the risks of future outages, but it also reflects the reality that businesses must be proactive in managing their digital infrastructure.

The disruption on October 11 serves as a critical reminder for stakeholders in the tech industry to reassess their reliance on cloud services. As digital traffic continues to surge, implementing fail-safes or alternative solutions may become essential for ensuring operational continuity. Companies could benefit from enhanced monitoring systems and robust contingency plans to address potential service disruptions.
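On the client side, one common fail-safe is to retry transient CDN errors with exponential backoff and jitter rather than failing immediately. Below is a minimal sketch; the `fetch` callable and the retryable status set are illustrative, and in a real client a 403 usually warrants investigation rather than blind retries.

```python
import random, time

def fetch_with_backoff(fetch, max_attempts=5, base=0.5, cap=8.0,
                       retry_statuses=frozenset({500, 502, 503, 504})):
    """Call fetch() (which returns an HTTP status code here) until it returns
    a non-retryable status or attempts run out, sleeping up to
    min(cap, base * 2**attempt) seconds (full jitter) between tries."""
    status = None
    for attempt in range(max_attempts):
        status = fetch()
        if status not in retry_statuses:
            return status
        time.sleep(min(cap, base * 2 ** attempt) * random.random())
    return status
```

The jitter matters: if every client retries on the same schedule, the synchronized retry wave can itself resemble the traffic spike that caused the outage.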

Moreover, this incident could spark a broader conversation about the need for improved infrastructure resilience in the face of increasing digital demands. As businesses and consumers become more dependent on cloud services, the ability of these services to withstand unforeseen traffic spikes will be paramount in maintaining accessibility and reliability. The necessity for diversified cloud solutions, including hybrid approaches that combine on-premises and cloud resources, may become more pronounced in light of this incident.

In conclusion, the CloudFront service disruption on October 11, 2023, not only hindered user access but also underscored the vulnerabilities of heavily relying on a limited number of cloud service providers. As these technologies continue to evolve, the imperative for robust, resilient infrastructure will only intensify, shaping the future of digital accessibility and reliability in our increasingly interconnected world.

Fake Google Meet Update Allows Hackers to Control Windows PCs

A new phishing scheme exploits a fake Google Meet update page to trick Windows users into granting hackers remote control of their computers.

A recent discovery by cybersecurity researchers has unveiled a sophisticated phishing tactic that targets Windows users through a counterfeit Google Meet update page. This deceptive scheme allows attackers to gain control of victims’ computers without the need for traditional malware or stolen passwords.

The fake update page, designed to resemble an official Google Meet notification, prompts users to click a button labeled “Update now.” However, instead of downloading a legitimate update, this action enrolls the user’s Windows computer in a remote management system controlled by the attackers.

Researchers from Malwarebytes, a cybersecurity firm known for its malware detection and removal software, identified this phishing website. The page employs familiar Google branding and colors, making it appear credible to unsuspecting users. Once a user clicks the “Update now” button, a built-in Windows feature is triggered, leading to a legitimate system window titled “Set up a work or school account.” This window typically appears when an IT department configures a device for an employee.

In this scam, the setup window is pre-filled with information that connects the computer to a remote management server controlled by the attacker. The system points to an online management service hosted on Esper, a legitimate platform used by businesses to manage their devices. If the victim proceeds through the setup process, their computer becomes enrolled in a mobile device management system, granting the attacker the same level of control that a corporate IT department would have over a work laptop.

Security experts note that attackers do not expect all users to complete the enrollment process. Even a small number of successful enrollments can provide enough access to make the campaign worthwhile.

This phishing attack exploits a legitimate Windows feature rather than relying on malware installation. Windows includes a device enrollment feature that allows companies to connect employee computers to a management system. Once a device is enrolled, administrators can remotely control various aspects of that machine. In a typical workplace, this functionality aids IT teams in installing software, enforcing security settings, and managing devices. However, attackers have found a way to trick users into joining their management system.

When users click the fake update button, Windows initiates a built-in enrollment process, which appears legitimate and can bypass many security warnings. If users complete the steps, the attacker effectively becomes the administrator of their computer, enabling them to silently install software, modify system settings, access files, lock screens, or even wipe the device entirely. Additionally, the attacker could install further malware at a later stage. Traditional antivirus tools may not detect any issues, as the operating system itself is executing the actions.

In response to inquiries, a Google spokesperson stated, “These ‘update now’ prompts are not legitimate Google communications. This is a phishing campaign that attempts to trick users into a Windows device enrollment process. Google Meet updates are handled automatically through your browser or the official app. Google will never prompt you to visit a third-party site to enroll a personal device to receive an update.”

To avoid falling victim to such scams, users should treat unexpected update prompts with caution and verify their legitimacy before proceeding. Major platforms rarely deliver updates through random web pages; legitimate Google Meet updates occur automatically through the browser or the official app and never require visiting third-party sites.

Users should always check the URL bar to ensure they are on the official Google Meet site, which is meet.google.com. A genuine update will not attempt to enroll an entire computer or trigger system-level setup screens. If such a prompt appears unexpectedly, it is likely a scam. Instead, users should access the service directly from its official website or app to check for updates.
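The host check described above can be sketched in a few lines of Python. This is a minimal illustration, not a complete anti-phishing defense; the only fact it assumes is the official host, meet.google.com, which the article names. Note that it compares the hostname exactly, which defeats lookalike domains that merely contain the real one.

```python
from urllib.parse import urlparse

OFFICIAL_HOST = "meet.google.com"  # the official site named in the article

def is_official_meet_url(url: str) -> bool:
    """Return True only when the URL's host is exactly the official one."""
    host = (urlparse(url).hostname or "").lower()
    # Exact match defeats lookalikes such as meet.google.com.evil.example
    return host == OFFICIAL_HOST

print(is_official_meet_url("https://meet.google.com/abc-defg-hij"))       # True
print(is_official_meet_url("https://meet.google.com.evil.example/update"))  # False
```

The same exact-match principle is what password managers apply when deciding whether to autofill credentials on a page.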

On a Windows computer, users can navigate to Settings, then Accounts, and look for “Access work or school.” If they see an unfamiliar account or organization listed, especially one they do not recognize, they should disconnect it immediately. This section indicates whether a device has been enrolled in a remote management system.

Cybercriminals often leverage personal information available online to enhance the effectiveness of their phishing attacks. Data removal services can help eliminate personal information from data broker sites, reducing the likelihood of targeted attacks. While this may not prevent this specific phishing tactic, it can make individuals harder targets overall.

Google’s AI protections in Gmail block over 99.9% of spam, phishing, and malware, but scams can still reach users through search results, ads, or links shared outside their inbox. Therefore, employing robust antivirus software with real-time protection can help detect suspicious behavior that may arise after an attacker gains control of a device. Although this phishing attack utilizes legitimate Windows features, security tools can still identify unusual system changes or malicious software installed afterward.

Keeping software up to date is crucial, as updates often include security enhancements that help block new attack methods. Running the latest version of Windows and web browsers reduces the risk of attackers exploiting older system vulnerabilities.

Using a password manager can also enhance security by ensuring that login details are only autofilled on legitimate websites. If users encounter a phishing page masquerading as a service like Google Meet, their password manager will not fill in their information, serving as a warning that something is amiss.

If a Windows system window unexpectedly appears asking users to set up a work or school account, they should stop and close it immediately. Legitimate setup prompts arise when configuring a new device or following employer instructions, not from clicking links on random websites.

As cybercrime evolves, attackers increasingly exploit legitimate features embedded within operating systems and cloud services. In this instance, both Windows device enrollment and the management platform used are genuine tools designed for business use, which attackers have redirected toward unsuspecting individuals. This highlights the ease with which powerful enterprise features can be repurposed for malicious purposes in the absence of adequate safeguards.

For further information on this phishing scheme and to stay updated on cybersecurity best practices, visit CyberGuy.com.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

Harvard physicist Dr. Avi Loeb suggests that the interstellar object 3I/ATLAS may be an alien probe due to its unusual characteristics and trajectory.

A recently discovered interstellar object, designated 3I/ATLAS, has sparked intrigue among astronomers and scientists alike. Harvard physicist Dr. Avi Loeb posits that the object’s peculiar features could indicate it is more than a typical comet, potentially serving as a reconnaissance mission from an extraterrestrial source.

3I/ATLAS was first identified in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile. This marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb points out that an image of the object reveals an unexpected glow in front of it, rather than the typical tail that comets exhibit. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.” This anomaly has raised questions about the object’s true nature.

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is notably bright given its distance from the sun. However, Dr. Loeb emphasizes that the most striking aspect of the object is its trajectory. He notes that if one were to imagine objects entering the solar system from random directions, only one in 500 would be aligned so precisely with the orbits of the planets.

Furthermore, 3I/ATLAS is expected to pass near Mars, Venus, and Jupiter, which Dr. Loeb argues is highly improbable to occur by chance. “It also comes close to each of them, with a probability of one in 20,000,” he stated.

The object is projected to reach its closest point to the sun, approximately 130 million miles away, on October 30. Dr. Loeb notes the stakes should the object prove to be technological, stating, “If it turns out to be technological, it would obviously have a big impact on the future of humanity. We have to decide how to respond to that.”

In a related context, earlier this year, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics mistakenly identified a Tesla Roadster, launched into orbit by SpaceX CEO Elon Musk seven years ago, as an asteroid.

As the scientific community continues to analyze 3I/ATLAS, the implications of its characteristics and trajectory remain a topic of significant interest and debate. The possibility of it being an alien probe invites further investigation and discussion about our understanding of interstellar objects.

A spokesperson for NASA did not immediately respond to requests for comment regarding the findings and implications surrounding 3I/ATLAS, according to Fox News Digital.

CoreWeave Secures $8.5 Billion Loan for AI Infrastructure Growth

CoreWeave has secured an $8.5 billion loan to enhance its AI cloud infrastructure, reflecting strong market confidence in the growing demand for artificial intelligence.

CoreWeave, a cloud infrastructure specialist, has announced that it has secured a delayed-draw term loan facility of up to $8.5 billion to scale its AI cloud infrastructure. The initial draw from this facility is approximately $7.5 billion, with an option to increase the total to $8.5 billion as the company stabilizes its data center assets.

The seven-year loan, which matures in March 2032, was arranged by Morgan Stanley and MUFG, with Blackstone Credit & Insurance serving as the anchor investor. This significant financing milestone is part of a broader $28 billion raised by CoreWeave over the past 12 months, underscoring strong market confidence in the demand for AI technologies.

CoreWeave plans to utilize the funds to fulfill major AI contracts and accelerate the expansion of its infrastructure. Brannin McBee, co-founder of CoreWeave, expressed pride in partnering with leading financial institutions for this landmark transaction, stating, “This reflects confidence in AI adoption and market validation of our model.”

The loan features a SOFR-based floating tranche at SOFR+2.25% and a fixed-rate tranche at approximately 5.9%. Specific covenants related to the loan were not disclosed.
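A back-of-envelope calculation shows what these rates imply for the initial draw. The article discloses only the spreads, so the SOFR level (4.3%) and the 50/50 split between tranches below are purely illustrative assumptions:

```python
# Back-of-envelope annual interest on the initial $7.5B draw, assuming an
# illustrative 50/50 split between the two tranches and a hypothetical SOFR
# of 4.3% (the article discloses only the spreads, not the split or SOFR).
initial_draw = 7.5e9
assumed_sofr = 0.043                    # hypothetical reference rate
floating_rate = assumed_sofr + 0.0225   # SOFR + 2.25%, per the article
fixed_rate = 0.059                      # ~5.9% fixed tranche, per the article

floating_share = fixed_share = 0.5      # assumed split
annual_interest = initial_draw * (floating_share * floating_rate
                                  + fixed_share * fixed_rate)
print(f"${annual_interest / 1e6:,.0f}M per year")
```

Under these assumptions, the initial draw would carry roughly $467 million in annual interest, which gives a sense of why rapid equipment deployment matters for servicing the debt.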

Since completing its initial public offering (IPO) in March 2025, CoreWeave has rapidly expanded its operations, including a recent investment in a data center in the United Kingdom. The company reportedly holds an 18% share of the dedicated AI GPU market. This financing comes at a time when capital spending on AI infrastructure is experiencing a boom, with Bank of America and Reuters noting that U.S. data center investments have reached record highs as major tech companies invest billions into AI.

CoreWeave faces competition from both hyperscale cloud providers and smaller GPU-focused companies. For instance, Lambda Labs raised $480 million in early 2025 and secured a $500 million GPU-backed loan, while Crusoe Energy recently closed a $350 million Series C funding round and obtained $200 million in asset-backed financing.

However, high leverage poses risks, particularly if demand for AI slows or if supply chain disruptions affect GPU deliveries. CoreWeave will need to deploy its equipment swiftly to service contracts and manage debt refinancing as it continues to expand. The company’s next steps include drawing on the loan facility in the coming quarters to fund data center construction and chip purchases. Its progress will be closely monitored in relation to competitors and the broader AI market cycle.

According to American Bazaar, this loan marks a significant step for CoreWeave as it positions itself to meet the increasing demands of the AI sector.

FBI Email Hack Highlights Importance of Securing Technology

The recent hacking of FBI Director Kash Patel’s personal email highlights the urgent need for individuals to strengthen their cybersecurity practices.

In a concerning incident, the personal email account of FBI Director Kash Patel was hacked, with the Iranian group known as the Handala Hack Team claiming responsibility. While the FBI confirmed that no classified data was compromised, the breach underscores a significant vulnerability in personal cybersecurity.

The breach involved unauthorized access to Patel’s personal email, exposing sensitive information such as photos, travel details, and messages spanning more than a decade, from 2011 to 2022. Although the FBI did not attribute the attack to a specific nation, the Handala Hack Team has publicly taken credit for the incident.

The FBI emphasized that no government or classified data was involved in this breach. In response to the threat posed by the Handala Hack Team, the U.S. State Department is offering a reward of up to $10 million for information leading to the identification of its members. Despite reaching out for comments, CyberGuy did not receive a response from the FBI before the article’s deadline.

A cybersecurity expert described the exposed material as akin to a “personal junk drawer,” a metaphor that resonates with many individuals who may have similar vulnerabilities in their own email accounts. The incident serves as a stark reminder that if even the head of the FBI can fall victim to hackers, ordinary users are equally at risk.

U.S. officials have long warned that foreign government-linked hackers, particularly those associated with Iran, have been targeting American citizens, especially those involved in government or political activities. Such cyberattacks often escalate during periods of geopolitical tension. Previous targets have included individuals connected to the Trump administration, as well as private companies, such as a recent incident involving a U.S. medical device company that faced operational disruptions due to hacking.

The shift in cyber warfare tactics is evident: personal accounts are now prime targets for hackers. This is largely because personal email accounts tend to have weaker security measures compared to official government systems. Many users rely on reused passwords, outdated security practices, and old email accounts, making them easier targets for malicious actors.

Once hackers gain access to an email account, they can exploit the information for various malicious purposes, potentially compromising not just the account itself but also associated accounts and personal data.

To mitigate these risks, individuals are encouraged to adopt stronger cybersecurity habits. One of the most effective defenses is enabling two-factor authentication (2FA) on email accounts. This additional layer of security requires a second code, making it significantly more difficult for hackers to gain access even if they have stolen a password.
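To illustrate why a stolen password alone is not enough under 2FA, here is a minimal sketch of the time-based one-time password most authenticator apps generate, per RFC 4226 (HOTP) and RFC 6238 (TOTP). It is a teaching sketch, not a production implementation; without the shared secret on the user's device, an attacker cannot compute the rotating code.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 time-based one-time password (30-second window)."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t) // step)

# With the RFC test secret, the counter at t=59 seconds is 1,
# and the 6-digit code is "287082" (RFC 4226 test vector).
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the device, a phished password on its own does not grant access.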

It is also crucial to avoid reusing passwords across multiple accounts. A single breach can jeopardize an entire digital life. Utilizing a password manager to create unique passwords for each account can enhance security significantly.

Moreover, users should regularly review and delete unnecessary emails and documents that contain sensitive information, such as financial details or travel plans. Important files should be moved to secure locations rather than left in an inbox, which can be a tempting target for hackers.

As cyberattacks become increasingly sophisticated, hackers can leverage stolen data to craft convincing phishing emails that appear legitimate. Therefore, it is essential to verify links and sender addresses before clicking on any content. Employing robust antivirus software can also provide an additional layer of protection against suspicious activities.

Even with proactive measures, personal information may still be circulating on data broker sites, which collect and sell details like addresses and phone numbers. Using a data removal service can help mitigate this risk by requesting the removal of personal information from numerous sites, thereby reducing the amount of data available to potential attackers.

Keeping devices updated is another critical step in maintaining cybersecurity. Software updates often include patches for known vulnerabilities, and delaying these updates can leave systems exposed to exploitation.

Using different email accounts for various purposes—such as banking, shopping, and personal communication—can limit the damage if one account is compromised. Email aliases can also be beneficial; these alternate addresses forward to a primary inbox and can be disabled if they become a target for spam or hacking attempts.

Another emerging security measure is the use of passkeys, which replace traditional passwords with secure logins tied to devices or biometrics. This method is considered one of the safest ways to protect accounts, as passkeys cannot be reused or phished.

The landscape of cybersecurity is evolving, with adversaries demonstrating their capability to adapt and target both institutions and individuals. However, the most common entry point for hackers remains simple: weak passwords and outdated security practices. This reality emphasizes that the first line of defense against cyber threats is not solely the responsibility of government agencies but also lies with individual users.

As the threat of cyberattacks continues to grow, it is crucial for everyone to take proactive steps to secure their digital lives. For more information on how to enhance your cybersecurity practices, visit CyberGuy.com.

According to CyberGuy, adopting smarter habits today can significantly reduce the risk of falling victim to cyber threats.

Baseball Embraces Robot Umpire Challenges Amid Changing Landscape

Major League Baseball introduces the Automated Ball-Strike Challenge System, allowing players to challenge calls using technology, marking a significant shift in the game’s officiating.

For generations, baseball has adhered to a straightforward rule: the umpire’s call on balls and strikes is final. However, this season, Major League Baseball (MLB) is set to revolutionize the game with the introduction of the Automated Ball-Strike Challenge System, commonly referred to as the “robot ump.” This innovation allows players to challenge an umpire’s call, enabling technology to determine the outcome.

The Automated Ball-Strike Challenge System (ABS) employs advanced camera technology to meticulously track every pitch, creating a digital strike zone tailored to each batter’s height. While the system enhances accuracy, it does not fully relinquish control to machines. Instead, it operates as a hybrid model where human umpires continue to make calls on the field, but players now have the option to challenge those calls if they believe an error has been made.
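The per-batter zone can be sketched as a simple geometric check. The plate is 17 inches wide; the vertical proportions below are rough rules of thumb for illustration, not MLB's actual ABS calibration, and the class names are invented for this sketch:

```python
from dataclasses import dataclass

PLATE_HALF_WIDTH_IN = 17 / 2  # home plate is 17 inches wide

@dataclass
class StrikeZone:
    top_in: float     # upper bound above the ground, inches
    bottom_in: float  # lower bound above the ground, inches

    @classmethod
    def for_batter(cls, height_in: float) -> "StrikeZone":
        # Illustrative proportions only, NOT MLB's actual calibration
        return cls(top_in=0.535 * height_in, bottom_in=0.27 * height_in)

    def contains(self, x_in: float, z_in: float) -> bool:
        """x: horizontal offset from plate center; z: height at the plate."""
        return (abs(x_in) <= PLATE_HALF_WIDTH_IN
                and self.bottom_in <= z_in <= self.top_in)

zone = StrikeZone.for_batter(74)  # a 6'2" batter
print(zone.contains(0.0, 30.0))   # pitch down the middle → True
print(zone.contains(12.0, 30.0))  # a foot off the plate's center → False
```

The real system measures the pitch's full 3D trajectory; this reduces the decision to the essential question of whether the ball's position at the plate falls inside a rectangle scaled to the batter.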

High-speed cameras strategically positioned around the stadium capture the pitch in three dimensions, measuring its trajectory as it crosses home plate. This data is processed in milliseconds, allowing results to be displayed almost instantly on stadium screens. Scott Jacka, senior director of technology development strategy at T-Mobile, explained that the company’s private 5G network facilitates the rapid transmission of pitch data to the ABS operator, ensuring that results are relayed back to the field without delay.

Each team begins a game with two challenges, which can only be initiated by the pitcher, catcher, or batter—no assistance from the dugout is allowed. Players signal a challenge by tapping their heads, and within seconds, the stadium displays the pitch’s location and whether it was a ball or a strike. If the challenge succeeds and the call is overturned, the team retains its challenge; if not, it loses one. This quick process has already become one of the most thrilling aspects of the game, with teams potentially receiving additional challenges during extra innings.
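The bookkeeping described above—two challenges per team, retained on success and spent on failure—can be sketched as follows. The class and method names are illustrative, not any official MLB API:

```python
class ChallengeBudget:
    """Sketch of the challenge rules: each team starts with two challenges,
    keeps a challenge when the call is overturned, and loses one when the
    call stands."""

    def __init__(self, challenges: int = 2):
        self.remaining = challenges

    def can_challenge(self) -> bool:
        return self.remaining > 0

    def resolve(self, overturned: bool) -> None:
        if not self.can_challenge():
            raise RuntimeError("no challenges left")
        if not overturned:
            self.remaining -= 1  # a failed challenge is spent

team = ChallengeBudget()
team.resolve(overturned=True)   # call overturned: challenge retained
team.resolve(overturned=False)  # call stands: one challenge spent
print(team.remaining)           # → 1
```

The asymmetry is what creates the timing strategy discussed later: a confident, correct challenge costs nothing, while a speculative one permanently shrinks the budget.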

Reliability is a crucial consideration for any new system, and MLB designed ABS to deliver results swiftly, ensuring the game remains uninterrupted. In the event of a malfunction, the human umpire remains the ultimate authority, providing a safety net to maintain the flow of the game.

The technology behind the ABS system is powered by Hawk-Eye Innovations, whose tracking systems are also used in tennis and soccer for line calls and goal decisions. This established track record lends credibility to the system’s accuracy. T-Mobile supports the infrastructure necessary for the rapid delivery of results to both stadium displays and broadcast feeds.

Historically, contentious ball and strike calls have been a part of baseball, often becoming focal points of discussion among fans and players alike. However, as technology advances, there is a growing impatience with mistakes that could be easily rectified. MLB views the ABS system as a means to alleviate frustration without entirely removing the human element from the game.

The introduction of challenges adds a layer of tension to the game, as fans and players alike await the outcome of each call. Instead of prolonged debates over disputed calls, the ABS system provides immediate clarity, transforming potential controversies into moments of drama.

Early testing has revealed that the timing of challenges can be more critical than the specific calls being challenged. Players who use their challenges too early may find themselves at a disadvantage later in high-pressure situations. Emotions can also play a role, leading to impulsive decisions that could cost teams in crucial moments.

Not every pitch is straightforward to challenge. High-velocity pitches and those with significant movement can be particularly difficult to judge in real time. Even seasoned players may misjudge a pitch by mere inches, complicating the decision to challenge.

This dynamic opens the door for players with exceptional plate discipline, such as Juan Soto, to leverage their skills strategically. Conversely, catchers face a shifting landscape; pitch framing—an art where catchers subtly position their gloves to influence the umpire’s call—will not disappear but will evolve as a strategic tool in conjunction with the ABS system.

Pitchers, on the other hand, may be less inclined to utilize the challenge system. Many believe they lack the best perspective on the strike zone during live play. Veteran players like Max Scherzer have raised broader questions about the extent to which technology should influence the game, a debate that remains unresolved.

Beyond officiating, the ABS system generates a wealth of data that teams can analyze in real time. This data can provide insights into pitch accuracy, player tendencies, and challenge success rates, potentially influencing coaching strategies and player evaluations.

While MLB has experimented with fully automated strike zones in the minor leagues, the traditional nature of baseball means many players and fans still value the human element behind the plate. They believe that the personality and judgment of umpires, along with their imperfections, contribute to the sport’s unique charm.

At present, the challenge system represents a compromise, addressing significant officiating errors while retaining the human touch that many cherish. As fans watch games unfold, they may notice a newfound fairness, with pivotal moments less likely to hinge on missed calls. The game is becoming more strategic, as players must weigh the timing of their challenges carefully, knowing that a single misstep could have lasting consequences.

In summary, baseball continues to evolve, integrating technology while striving to preserve its core essence. The robot ump challenge system enhances the game by empowering players to voice their concerns over calls, ultimately shaping a more transparent and engaging experience for fans. As the debate over technology’s role in baseball continues, one question remains: if technology can ensure accuracy, will fans embrace it over the traditional human umpire?

According to CyberGuy, the introduction of the ABS system marks a significant step forward in the evolution of baseball officiating.

Indian-American Satish Jha Discusses Technology and Ideas in Global Boardrooms

Satish Jha, a Boston-based journalist and edtech pioneer, discusses the thoughtful application of technology and its potential for social impact in a conversation reflecting on his diverse career journey.

Technology creates opportunity, but it must be applied thoughtfully, says Satish Jha, a Boston-based journalist, edtech pioneer, and investor who led the One Laptop per Child initiative in India.

Few careers move as seamlessly across journalism, global corporate leadership, investing, and social impact as that of Satish Jha. From co-founding Jansatta, one of India’s most influential Hindi dailies, and editing Dinamaan at the Times of India Group, to serving in CXO roles with Fortune 100 companies in Switzerland and the United States, Jha’s journey spans institutions, geographies, and ideas. In recent years, he has been an early-stage investor in numerous U.S. startups and a driving force behind technology-led social initiatives, including leading One Laptop per Child (OLPC) in India and supporting large-scale education efforts through the Vidyabharati Foundation of America and Ashraya.

Jha is also the author of *The Full Plate: India’s Education Revolution and the Race for Human Capital*, and he contributes a regular column to *The American Bazaar*.

In a wide-ranging conversation with Kesav Dama, Jha reflects on the formative influence of his upbringing and his years at Jawaharlal Nehru University, the bold decisions that helped build a modern Hindi newspaper from scratch, and the evolving role of journalism in an age of social media and misinformation. He also discusses his transition into global corporate leadership, his approach to investing, and his long-standing commitment to using technology to drive social impact—from rural development and digital infrastructure to energy, healthcare, and education.

At its core, the conversation returns to a few enduring themes: the power of ideas when paired with execution, the importance of humanizing technology, and the belief that while circumstances shape opportunity, they need not define outcomes. The interview has been edited for clarity.

Kesav Dama: You were born in Bihar and spent time in Lucknow and Varanasi. Tell us about your upbringing—especially your parents and their influence on you.

Satish Jha: My upbringing was shaped by two very different yet complementary influences. On my father’s side, there was a strong emphasis on education and scholarship. My grandfather was a professor of Sanskrit, and even though my father lost him at a very young age, that intellectual tradition continued in our household.

On my mother’s side, the family had a more aristocratic background—there were administrators, lawyers, and professionals of various kinds. It was a family that valued leadership and public life. So, in a way, I grew up at the intersection of intellectual rigor and social awareness. One side grounded me in discipline and learning; the other exposed me to ambition and public engagement. That combination stayed with me throughout my life.

Kesav Dama: Do you agree with the idea that where and when you are born largely determines your future?

Satish Jha: I would say it determines a significant part of it—perhaps 70-80 percent. Your environment, access, and early influences shape your opportunities. But I don’t think it is destiny. There is still room for agency, for effort, and for making choices that alter your trajectory.

Kesav Dama: You studied economics at Jawaharlal Nehru University (JNU) in the late 1970s. What was that experience like?

Satish Jha: At the time I was there, JNU was probably one of the most extraordinary academic environments in India. It brought together an incredibly talented group of students and thinkers. To give you a sense of that ecosystem—people from my extended academic circle went on to become global leaders. Abhijit Banerjee, who later won the Nobel Prize in Economics, was part of that intellectual milieu. Others went on to lead major institutions, join policymaking bodies, or build global corporations. JNU was not just about academics. It was about exposure to ideas—politics, economics, philosophy—and learning how to question, debate, and engage. That environment shaped how we thought about the world.

Kesav Dama: You’ve consistently worked at the intersection of technology and social impact. Why is that important to you?

Satish Jha: Technology, by itself, is just a tool. What matters is how societies absorb and use it. Different societies exist at different stages of development. Some create cutting-edge technologies, while others are still trying to absorb earlier innovations. Progress depends on how effectively a society can adopt and apply technology. If technology is too advanced for a society to absorb, it has little impact. If there is no access to technology at all, progress stalls. So the key is alignment—using the right level of technology to drive meaningful social outcomes. Technology is necessary for progress, but it is not sufficient. It must be humanized. It must serve people.

Kesav Dama: You co-founded a Hindi daily and scaled it rapidly. What were the key decisions that drove that success?

Satish Jha: I came into journalism without prior experience, which, in hindsight, was an advantage. I had no preconceived notions and was willing to experiment. One of the most important decisions we made was to adopt computers for publishing. At that time, no newspaper in India was fully composed using computers. We took that leap despite not knowing exactly how to implement it. The second key decision was about language. We chose to write in a way that ordinary people spoke—not in overly formal or translated Hindi. That made the newspaper accessible. We also focused on presentation—better layout, better readability, and a modern look. Combined with strong content and distribution support, it helped us stand out. In short, we were willing to take risks others were not willing to take.

Kesav Dama: How do you see the difference between traditional journalism and today’s social media-driven landscape?

Satish Jha: Journalism and social media are fundamentally different. Journalism is an institution. It operates within a framework of accountability, standards, and professional norms. Journalists are trained, and their work is subject to scrutiny. Social media, on the other hand, is a platform for expression. Anyone can publish anything. That democratization has value, but it also creates challenges—especially around misinformation.

Today, the biggest issue is not access to information—it is the ability to distinguish between what is real and what is not. Even I find myself questioning what I see. However, over time, people will adapt. They will learn to ask questions, verify sources, and use tools—including AI—to check authenticity. Progress is never linear. It is messy, but it moves forward.

Kesav Dama: With so much free content available, how can journalism remain financially viable?

Satish Jha: Journalism survives where there is demand. If people value credible information, they will pay for it—directly or indirectly. The challenge today is that attention is fragmented. But credibility still matters. In the long run, institutions that build trust will endure.

Kesav Dama: How did you transition from journalism into global corporate leadership?

Satish Jha: That transition happened largely because of circumstances and opportunities. When my wife moved to Geneva for her work with global health initiatives, I relocated as well. While there, I pursued further education and began exploring opportunities. I received offers from major global organizations, including leadership roles in technology and strategy. I chose a path that allowed me to work internationally and engage with global markets. One of my guiding principles was simple: if you give me a dollar, I will return more than a dollar. That mindset helped build trust.

Kesav Dama: You later moved into investing and entrepreneurship. How did that evolve?

Satish Jha: After years in corporate leadership and consulting, I began to understand how businesses are built and scaled. That naturally led to investing. I started investing in early-stage companies—particularly those working on technologies that could create new possibilities or make things cheaper, faster, or better. Over time, I made dozens of investments. Some succeeded, some didn’t. That’s the nature of early-stage investing. For me, investing is not just about returns. It is about people, ideas, and the potential to create impact.

Kesav Dama: What do you look for when deciding whether to invest in a startup?

Satish Jha: There are a few key criteria: sustainability, scalability, profitability potential, and impact. But beyond all that, it comes down to people. Do I believe in the founders? Do I understand the space? Does it excite me?

Kesav Dama: You’ve been involved in rural development initiatives since a young age. How did that shape your later work?

Satish Jha: I started working in rural areas when I was about 16 or 17. It wasn’t driven by a grand plan—it was more of an instinct to contribute. Later, when I worked on initiatives like Digital Partners India, the idea was to use technology to bridge gaps—especially where physical infrastructure was lacking. We talked about “digital highways” instead of physical roads. That idea later influenced various models adopted by corporations and governments.

Kesav Dama: You’ve been associated with ideas that resemble today’s digital infrastructure systems in India. How do you view that evolution?

Satish Jha: The core idea was always about simplifying access—using technology to connect identity, finance, and services. There are many ways to build such systems. Some are more efficient than others. What matters is usability, scalability, and cost-effectiveness. India has made significant progress, but there is always room for simplification.

Kesav Dama: Tell us about your work in energy and healthcare for underserved communities.

Satish Jha: In energy, we worked on decentralized systems—using biomass and local resources to generate power. The goal was to create small, self-sustaining units that could serve rural communities. In healthcare, we focused on digitizing patient data. We built systems where doctors could access a patient’s history through a digital platform—something that seems obvious today but was quite innovative at the time. Both efforts were about leveraging technology to solve real-world problems.

Kesav Dama: What is your vision for the future of education in India?

Satish Jha: Education is the single most powerful lever for societal transformation. The issue in India is not just access—it is quality. A large percentage of students are not receiving education that equips them for the future. The solution is not necessarily more spending—it is smarter spending. Technology can reduce costs and improve outcomes, but it must be applied effectively. If we invest meaningfully in education, the economic impact could be transformative.

Kesav Dama: You’ve mentored many entrepreneurs. What drives that?

Satish Jha: At this stage of my life, I feel a responsibility to contribute. I don’t look at mentorship as a structured activity. I engage where I feel I can make a difference—where my experience can help someone move forward. It’s not about scale. It’s about impact.

Kesav Dama: You’ve been closely associated with TiE. How do you see its role today?

Satish Jha: TiE has played an important role in building the startup ecosystem, especially in early-stage investing and mentorship. But ecosystems evolve. New institutions emerge to address new needs. TiE remains relevant, but it is part of a larger, multi-layered ecosystem.

Kesav Dama: How did you get involved with the One Laptop Per Child initiative?

Satish Jha: I was introduced to the initiative and felt it was being misunderstood—especially in India. I reached out, got involved, and eventually took responsibility for driving it in India. It was an extraordinary experience—both in terms of learning and impact. Not everything scaled the way we hoped, but the idea was powerful.

Kesav Dama: If you had to summarize your journey and message, what would it be?

Satish Jha: The message is simple: you can do it. Where you come from matters, but it does not define your limits. Technology creates opportunities, but it must be applied thoughtfully. And ultimately, progress happens when people connect ideas with action.

The interview highlights Jha’s belief in the transformative power of technology when used responsibly and effectively, underscoring the importance of human-centered approaches in driving social change, according to The American Bazaar.

Roblox Enhances Online Safety Measures Through Artificial Intelligence

Roblox is implementing a real-time AI moderation system to enhance online safety by analyzing avatars, text, and environments simultaneously across its platform.

Roblox, a popular online platform with over 144 million daily users, is introducing a new real-time AI moderation system aimed at detecting harmful content. This innovative approach analyzes avatars, text, and environments together, addressing the complexities of moderation in a user-generated ecosystem.

Unlike traditional moderation tools that evaluate individual elements in isolation, Roblox’s new system employs what is known as multimodal moderation. This method assesses the entire scene from the user’s perspective, capturing the interplay between 3D objects, avatars, and text in real time. Matt Kaufman, Roblox’s chief safety officer, explained the significance of this shift, stating, “We already moderate all of the objects in a virtual world, but how they come together and interact has long been a challenge.”

The challenge of moderation arises from the fact that harmful content can often be subtle and context-dependent. Kaufman noted, “Traditional AI moderation systems, which moderate one object at a time, can lack context and miss combinations that could be problematic in ways that the individual items are not.” This new system aims to fill that gap by understanding the relationships between different objects and how they interact, thus catching nuanced violations that standard filters might overlook.

Roblox’s multimodal moderation system is particularly focused on scenarios that have historically slipped through the cracks. For instance, in games that allow free-form drawing or avatar customization, a drawing or an avatar may seem harmless on its own. However, when combined, they could create inappropriate content. Kaufman elaborated, “The system can detect combinations of objects that may violate our community standards,” allowing for a more comprehensive assessment of user-generated content.
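As a toy illustration of the gap Kaufman describes (not Roblox's actual system, whose models and thresholds are not public), a per-item filter scores each object in isolation, while a multimodal check also scores flagged combinations of co-occurring objects:

```python
from itertools import combinations

# Entirely hypothetical scores and rules, for illustration only:
# each object is individually benign, but one pairing is not.
ITEM_SCORES = {"drawing:arrow": 0.1, "avatar:apple": 0.1, "text:hello": 0.0}
COMBO_RULES = {frozenset({"drawing:arrow", "avatar:apple"}): 0.9}

def per_item_flag(scene, threshold=0.5):
    # Traditional moderation: each object is judged on its own
    return any(ITEM_SCORES.get(obj, 0.0) >= threshold for obj in scene)

def multimodal_flag(scene, threshold=0.5):
    # Joint moderation: also score pairs of objects that appear together
    if per_item_flag(scene, threshold):
        return True
    return any(COMBO_RULES.get(frozenset(pair), 0.0) >= threshold
               for pair in combinations(scene, 2))
```

A scene containing both the drawing and the avatar passes the per-item check but trips the combination rule, which is exactly the class of violation the article says standard filters miss.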

The system is already yielding significant results, with Roblox reportedly shutting down around 5,000 servers daily for violations. Kaufman emphasized the scale of the platform, stating, “With 144 million users connecting and creating on Roblox every single day, our safety systems must be as agile and dynamic as our creators themselves.”

While the new system is designed to act swiftly against harmful behavior, Kaufman acknowledged that no system is entirely foolproof. “We are committed to doing our best to stay ahead of those attempting to bypass safety protocols,” he said, adding that the goal is to scale the multimodal system to monitor 100% of playtime.

For parents, this proactive approach to safety is a significant development. Instead of waiting for reports of inappropriate behavior, the system actively works in the background to identify and shut down problematic servers in real time. Kaufman reassured parents, “We want them to know that we aren’t just reacting to reports; we are proactively building some of the most sophisticated AI moderation systems in the world to help protect their children in real time.”

Roblox also emphasizes the importance of parental involvement in online safety. Parents are encouraged to engage with their children about the games they play and the people they interact with. Simple steps, such as reviewing account settings and discussing screen time rules, can further enhance safety.

Addressing concerns about false positives, Kaufman explained that Roblox is continuously evaluating the accuracy of its multimodal moderation system. “We have a continuous evaluation loop set up to measure false positives from the multimodal moderation system,” he said, indicating that user feedback plays a crucial role in refining the system.

Despite the reliance on advanced AI, Roblox maintains that human oversight remains essential. The platform employs a combination of AI and safety experts to review content before it is made available to users. The new system serves as an additional layer of protection, rather than a replacement for existing safety measures.

As with any powerful technology, questions about privacy and data usage arise. Roblox assures users that data collected for safety purposes is strictly limited to that function. The company is also committed to ensuring fairness and transparency in its safety systems, providing creators with insights into server shutdowns through a new dashboard feature.

Looking ahead, Roblox aims to enhance its moderation capabilities further, including the detection of recreations of real-world events that may violate community standards. Kaufman noted the importance of context in moderation, stating, “Standard filters might see a specific building or a line of text in isolation and not recognize a violation.” The goal is to understand the relationships between environments, avatars, and accompanying chat to improve safety.

This shift in approach represents a significant evolution in how online platforms manage safety. Rather than merely reacting to incidents after they occur, Roblox is striving to prevent harmful behavior before it reaches users. As AI continues to play a larger role in moderating online interactions, the balance between safety, fairness, and user freedom will become increasingly complex.

As the conversation around AI moderation evolves, it raises important questions about the level of control we are comfortable relinquishing to technology. For now, Roblox’s commitment to enhancing online safety through innovative AI solutions marks a promising step forward in creating a safer digital environment for its users.

According to CyberGuy, the implementation of this system is just the beginning, with future developments aimed at further refining the balance between safety and user experience.

Shatabdi Sharma Appointed Chief Information Officer at Capacity

Shatabdi Sharma has been appointed Chief Information Officer at Capacity LLC, where she will lead the company’s global technology strategy and oversee engineering teams in the U.S. and India.

Shatabdi Sharma, an Indian American technology executive, has joined Capacity LLC as the Chief Information Officer (CIO). In her new role, she will spearhead the company’s global technology strategy and manage engineering teams based in both the United States and India.

Sharma’s appointment comes at a pivotal time when logistics providers are increasingly investing in technology, data, and automation to navigate the complexities of retail and e-commerce distribution. Capacity, a leading fulfillment and logistics provider for high-growth consumer brands, views her leadership as a significant step in enhancing its operational capabilities.

According to a news release from the North Brunswick, New Jersey-based company, Sharma will concentrate on fortifying Capacity’s technology infrastructure, enhancing data and analytics capabilities, and ensuring the scalability of its systems.

With over two decades of experience in enterprise technology transformation across various sectors, including retail, consumer goods, and global supply chains, Sharma brings a wealth of knowledge to her new position. Most recently, she served as the Brand Technology Leader for Calvin Klein at PVH Corp, a global apparel company known for its brands like Calvin Klein and Tommy Hilfiger. In that role, she was instrumental in modernizing the brand’s end-to-end value chain, which encompasses product design, development, and planning through to delivery across a distributed global supply chain.

Sharma’s tenure at PVH also included roles as Vice President of Global Application Services and Director of Global E-commerce, where she led enterprise platforms that supported e-commerce, supply chain operations, and global business systems. Her previous experience includes technology leadership positions at Hitachi Consulting, Canon, Wegmans, and Home Depot, where she played a key role in modernizing ERP, warehouse management, order management, and integration systems across complex international operations.

In her new role at Capacity, Sharma aims to leverage the company’s strong foundation of operational expertise and institutional knowledge in fulfillment. “My focus is on building the technology strategy that amplifies that strength by integrating data, modern cloud infrastructure, and intelligent systems that allow us to scale while continuing to deliver transparency and efficiency for our partners,” she stated.

As CIO, Sharma will prioritize initiatives that unify data across systems, enhance analytics capabilities, and expand the use of emerging technologies, including AI-driven automation. Her strategic roadmap also emphasizes ongoing investments in security, governance, and workforce upskilling to ensure that the company’s technology teams are well-prepared for the next phase of growth.

Jeff Kaiden, Chief Executive Officer at Capacity, expressed confidence in Sharma’s capabilities, stating, “Shatabdi brings a rare combination of enterprise technology leadership and hands-on supply chain experience. Her perspective helps ensure our technology strategy continues to support the operational realities of fulfillment while positioning Capacity for the next generation of data-driven logistics.”

Sharma has also highlighted the importance of responsible technology adoption in Capacity’s approach. “AI and automation present tremendous opportunities, but they must be implemented thoughtfully,” she remarked. “At Capacity, we are focused on using technology to empower our teams and deliver better insights for our clients while maintaining strong governance and security practices.”

Beyond her technical expertise, Sharma is a passionate advocate for mentorship and diversity in the technology sector. She is actively involved with Extraordinary Women in Tech (EWiT) and has received several accolades, including the 2025 Top 20 Women We Admire Award and the ISG Women in Digital Silver Luminary Award.

Sharma holds a Master of Science in Computer Science, with a focus on Artificial Intelligence, from Utah State University, as well as a Bachelor of Engineering from Barkatullah University in Bhopal, India.

This appointment marks a significant milestone for Capacity as it continues to enhance its technological capabilities in the logistics industry, according to The American Bazaar.

Reddit VP Durgesh Kaushik Resigns to Launch Modveon, Secures $10M Funding

Durgesh Kaushik, former Vice President of Product at Reddit, has resigned to co-found Modveon, a startup focused on digital infrastructure, securing $10 million in initial funding.

Durgesh Kaushik, who served as Vice President of Product at Reddit for three and a half years, has announced his resignation to co-found a new venture named Modveon. This startup aims to address critical challenges in digital infrastructure for the future.

In a personal update shared on LinkedIn, Kaushik reflected on his time at Reddit, describing it as a period filled with significant learning and impactful experiences. He expressed gratitude to key figures at the company, including Pali Bhat, Steve Huffman, and Jen Wong, for their support and partnership. “Leading Product and International Growth at Reddit has been a masterclass in scale,” he stated, adding that he takes pride in helping make Reddit relevant to millions around the globe.

Kaushik’s departure marks a transition toward entrepreneurship, as he focuses on what he perceives as one of the most pressing challenges of the coming decade. He noted, “The internet is world-class at distribution, but the systems underneath it are still version 1.0. Identity is fragmented. Communication is noisy. Coordination is harder than it should be. Money movement is still far too broken in too many places.”

Modveon is positioned as a “verified operating system for modern nation-states and citizens,” aiming to fill gaps in identity, coordination, and financial systems. The startup has successfully raised $10 million in funding from investors, including Coinbase Ventures and Firebolt Ventures.

Kaushik explained the timing of the venture by highlighting the convergence of emerging technologies. “AI is becoming a new interface layer for how people navigate the digital world, and stablecoins are creating new rails for how value moves,” he wrote. He emphasized that both technologies become significantly more effective when built on trusted and verified systems, rather than fragmented ones.

He is co-founding Modveon alongside Nana Murugesan, who serves as CEO. The two share a long professional history, having previously worked together at Snapchat and Coinbase. “From our days scaling Snapchat to our time at Coinbase, we’ve built a decade of trust. There is no one I’d rather build with from the ground up,” Kaushik remarked.

Murugesan echoed Kaushik’s sentiments in a public response to the announcement. “Grateful to be building this with you Durgesh! We have done a lot together over the last decade, now we build what the next decade will run on. Excited for what’s ahead at Modveon,” he stated.

In the meantime, Steve Huffman, CEO of Reddit, has indicated that the company is looking to ramp up hiring of recent college graduates. This comes as parts of the tech sector pull back on entry-level recruitment amid the growing use of AI tools. Speaking on the Sourcery with Molly O’Shea podcast, Huffman noted, “The kids coming out of college right now learned how to program with AI. They’re really good at it, and so I think we will go heavy on new grads, because they’re so much more AI native.”

Kaushik’s move to launch Modveon represents a significant shift in his career, as he seeks to innovate within the digital landscape. His vision for the startup reflects a commitment to addressing foundational issues that have long plagued the internet.

According to The American Bazaar, the future of Modveon appears promising as it embarks on this ambitious journey.

Air Taxis Expected to Launch in the U.S. in Summer 2026

New federal initiatives may pave the way for air taxis to operate in select U.S. cities as early as summer 2026, marking a significant step toward integrating electric vertical takeoff and landing (eVTOL) aircraft into everyday airspace.

For years, the concept of air taxis has lingered in the realm of futuristic technology, often described as “almost here.” With sleek designs and promises of quiet flights, lower costs, and the ability to bypass traffic, the anticipation has been palpable. However, air taxis may soon move from concept to reality, thanks to a new federal initiative that could see electric models taking to the skies as early as summer 2026.

This initiative represents the first program of its kind aimed at integrating air taxis into everyday U.S. airspace. While operations will not be widespread or fully scaled initially, the program is set to establish a foothold for air taxi services in various locations across the country.

Air taxis, also known as eVTOLs (electric vertical takeoff and landing vehicles), are small electric aircraft designed to take off and land vertically. They promise to transport passengers over short distances within urban areas, potentially allowing individuals to skip traffic and travel from one part of a city to another in mere minutes.

The appeal of air taxis is clear, but the journey to their introduction has been fraught with challenges. The primary obstacle has not been technological; rather, it has been regulatory. The Federal Aviation Administration (FAA) mandates that commercial aircraft adhere to stringent safety standards, with failure rates expected to align more closely with those of commercial airlines than with automobiles.

This regulatory landscape poses a challenge for eVTOLs, which are fundamentally different from traditional aircraft. Their unique design allows for vertical takeoff and landing, followed by a transition into forward flight, adding layers of complexity and risk. Companies such as Joby Aviation and Archer Aviation have invested years in testing their aircraft, logging thousands of flights, yet full regulatory approval has remained elusive.

In response to these challenges, the government has introduced the eVTOL Integration Pilot Program (eIPP), aimed at expediting the approval process without compromising safety standards. This program allows companies to initiate limited operations in designated areas rather than waiting for comprehensive nationwide approval. This shift in regulatory approach enables companies to demonstrate safety in real-world conditions and gradually expand their operations.

Eight pilot programs have already been approved across 26 states, creating one of the largest real-world testing environments for next-generation aircraft. These eVTOLs will not only transport passengers but will also facilitate cargo delivery, emergency medical response, and regional transportation. Data collected from these pilot programs will assist the FAA in developing new regulations to safely broaden the use of air taxis across the nation.

“This is the clearest sign yet from the White House, the FAA, and the DOT that bringing air taxis to market in the United States is a real priority,” said Adam Goldstein, founder and CEO of Archer. “We appreciate Secretary Duffy and Administrator Bedford’s leadership and are excited to bring Midnight to the skies of some of America’s largest cities.”

The push for air taxis is not merely about enhancing urban mobility; it is also a response to international competition. Countries like China have already made significant strides in drone technology and air mobility, with companies there conducting commercial passenger flights since 2023. The U.S. aims to reclaim its leadership position in this domain, accelerating innovation across both civilian and military sectors.

Many of the eVTOLs being developed are designed with autonomy in mind. Initially, pilots will be on board during flights, but the long-term vision is to eliminate the need for human pilots. This shift is driven by the desire to reduce weight, lower costs, and enhance scalability. Companies are actively testing automated systems capable of making complex flight decisions in real time, suggesting that the air taxis of the near future may differ significantly from their initial iterations.

While air taxis are unlikely to replace personal vehicles overnight, they could fundamentally alter urban transportation. For residents in major metropolitan areas, air taxis may soon offer a new option that significantly reduces travel time. Additionally, medical flights and disaster response could become faster and more efficient, potentially transforming emergency services.

Initially, rides may come at a premium price, but as the technology matures and demand increases, costs could align more closely with traditional rideshare services. The move toward autonomous air taxis could signal a broader transformation across various modes of transportation.

The timeline for air taxi operations is becoming clearer, with limited flights expected to commence as early as summer 2026. However, this does not imply that consumers will be able to book flights through an app immediately. Initial operations will likely focus on specific areas and applications.

Once the door to air taxi operations opens, expansion is expected to occur rapidly, similar to the trajectories seen with rideshare services and electric vehicles. “The first time I saw a Waymo on the road in San Francisco, it was a big deal. Now, self-driving cars are just part of everyday life there. I believe the eIPP will do the same thing for air taxis,” Goldstein added. “Every safe flight builds towards public acceptance, and we need to build that acceptance in parallel with our certification efforts.”

Air taxis have long been categorized as a technology on the verge of realization. Now, they are poised to enter the realm of practicality. Despite the challenges that remain—such as safety, cost, and infrastructure—the new regulatory approach is set to accelerate progress. As the public begins to experience this mode of travel firsthand, perceptions and expectations are likely to evolve rapidly.

If given the opportunity to bypass traffic and fly across your city in minutes, would you take the leap, or would you prefer to wait and see how others fare?

According to Fox News, the new regulatory approach is expected to accelerate air taxis' path from concept to everyday service.

Srikant Appointed to Lead National Center for Supercomputing Applications

R. Srikant, an IIT Madras alumnus, has been appointed the new director of the National Center for Supercomputing Applications, a leading hub for high-performance computing and data science.

Indian-born engineering scholar R. Srikant has taken the helm as the new director of the National Center for Supercomputing Applications (NCSA), one of the world’s foremost centers for high-performance computing and data science. His appointment marks a significant milestone for the center as it continues to play a crucial role in advancing research in various fields.

Srikant, who holds the Grainger Distinguished Chair in Engineering and is a professor at the University of Illinois Urbana-Champaign, officially assumed his role on January 1, 2026. He succeeds Bill Gropp, the previous director, and also serves as co-director of the C3.ai Digital Transformation Institute, which is a collaborative effort with the University of California, Berkeley.

His journey to leading NCSA began in India, where he established his academic foundation at the Indian Institute of Technology, Madras. After earning his undergraduate degree in 1985, Srikant moved to the United States to pursue advanced studies at the University of Illinois, where he joined the faculty in 1995.

Srikant’s deep connections to both his alma mater and his early education in India have significantly influenced his career, which is characterized by the integration of complex theoretical mathematics with practical technological applications.

His new role at NCSA comes at a critical juncture, as artificial intelligence and extensive data processing are becoming increasingly vital to global research initiatives. NCSA is tasked with providing the infrastructure necessary to support breakthroughs in diverse areas, including genomics and climate modeling.

“I’m very excited to begin this new journey with NCSA,” Srikant expressed. “My focus is on supporting our excellent researchers and staff, strengthening collaboration across the center, and ensuring that NCSA continues to thrive in its research, service, and impact missions.”

NCSA is not unfamiliar territory for Srikant. He previously served as the acting director of operations at NCSA for several months in 2023 and has engaged in numerous research collaborations between his home department and the high-performance computing experts at NCSA.

His research interests encompass a wide range of topics, including artificial intelligence, machine learning, communication networks, quantum computing, and applied probability. Srikant has received significant recognition for his work on the mathematical analysis and design of algorithms for the internet, wireless networks, and data centers. His accolades include the IEEE Koji Kobayashi Field Award for Computers and Communications and the ACM SIGMETRICS Achievement Award. Additionally, he is a fellow of the Institute of Electrical and Electronics Engineers (IEEE).

For Srikant, this new role represents a full-circle moment in a career that began with a degree in Chennai and has now culminated in a leadership position at a premier American computational research institution. His vision for NCSA is poised to drive innovation and collaboration in the rapidly evolving landscape of supercomputing and data science.

According to The American Bazaar, Srikant’s leadership is expected to enhance NCSA’s impact on research and technology development.

Indian-American Researchers Create Tool to Identify AI-Generated Radiology Reports

Three Indian American researchers at the University at Buffalo are developing a tool to detect AI-generated radiology reports, addressing concerns over falsified medical documentation and fraudulent insurance claims.

In an effort to combat the rising threat of falsified medical documentation and bogus insurance claims, a team of researchers from the University at Buffalo (UB) is developing a tool to identify AI-generated radiology reports. This initiative comes in response to the potential dangers posed by AI-generated medical reports, which can impersonate doctors or fabricate injuries in X-ray images, leading to significant issues within the medical and insurance sectors.

The UB team, led by Nalini Ratha, PhD, a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering, believes they have created the first AI system specifically designed to differentiate between radiology reports authored by humans and those generated by artificial intelligence. “With generative AI becoming more capable of producing remarkably convincing radiology reports, there’s a greater risk of fabricated reports being used to falsify medical histories and support fraudulent claims,” Ratha explained.

Ratha emphasized the unique challenges posed by radiology reports, which possess a highly specialized structure, vocabulary, and stylistic norms that make general-purpose detection systems unreliable. “Therefore, our goal was to build a detection framework designed specifically for radiology that can distinguish clinician-written medical documentation from synthetic text before it reaches clinical or insurance workflows,” she added.

The research team, which includes PhD students Arjun Ramesh Kaushik and Tanvi Ranga, presented their findings in a study titled “Detecting Synthetic Radiology Reports Using Style Disentanglement” at the 2025 GenAI4Health workshop during the Conference on Neural Information Processing Systems held in San Diego in December.

As part of their research, the team compiled a dataset comprising 14,000 pairs of radiologist-authored and AI-generated chest X-ray reports. They employed two distinct methods to create the synthetic reports: paraphrasing actual radiologist reports using advanced large language models (LLMs) and generating complete reports directly from chest radiographs using medical vision-language models (VLMs).

This dataset is notable for being the first to integrate both text-based and image-based synthetic radiology reports, marking a significant advancement for trustworthy AI research in healthcare. The samples focused specifically on the findings section of the reports, which captures the radiologist’s detailed analysis and includes extensive domain-specific terminology and descriptive language.

“The findings section is both central to authorship attribution and the one most susceptible to exploitation,” Ratha noted.

The subsequent phase of their study involved developing an authorship-detection framework tailored to operate on this dataset. Although LLMs can replicate clinical terminology, they often struggle to mimic the stylistic characteristics inherent in human-authored radiology reports.

Recognizing this gap, the UB researchers devised a detection model based on BERT–Mamba technology, designed to separate each report’s stylistic features from its underlying clinical content. Their model demonstrated high accuracy and consistency, achieving Matthews correlation coefficient (MCC) scores ranging from 92% to 100% across both text-to-text and image-to-text categories. Furthermore, the framework proved effective in cross-LLM tests, accurately identifying AI-generated reports from models it had not previously encountered.
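The MCC figures reported above summarize classification quality on both classes at once, staying meaningful even when the classes are imbalanced. For readers unfamiliar with the metric, a minimal sketch of how it is computed from binary predictions:

```python
import math

def mcc(y_true, y_pred):
    # Matthews correlation coefficient for binary labels
    # (here, 1 could mean "AI-generated", 0 "clinician-written").
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, MCC is 0 when any marginal count is zero
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

An MCC of 1.0 means perfect agreement, 0.0 means chance-level performance, and negative values mean systematic disagreement, so scores in the 92-100% range indicate near-perfect detection.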

“What we found is that LLMs tend to write in polished, expansive language, while clinicians prefer concise, direct terms. For instance, radiologists use straightforward terms like ‘heart’ or ‘lung,’ whereas LLMs often opt for more elaborate phrases like ‘pulmonary vasculature.’ This distinction became a clear stylistic signal that our model learned to recognize,” Ranga explained.

Despite the promising results, the research team plans to continue refining both the dataset and the benchmark detection model in preparation for public release. They also envision that as AI systems become increasingly sophisticated and tailored to specific fields like radiology, these tools could significantly alleviate the workload for radiologists.

While the focus of their research is on radiology, Ratha believes the implications extend beyond healthcare. The style-based detection approach developed by the team could also be beneficial in safeguarding industries that are increasingly vulnerable to AI-generated forgeries, fabricated records, and synthetic narratives, including insurance, finance, journalism, education, and the legal profession.

According to The American Bazaar, this innovative research highlights the critical need for reliable detection methods as AI technology continues to evolve and integrate into various sectors.

Three Steps to Secure Your Email and Protect All Accounts

Account takeover fraud can devastate your finances, but implementing three key security measures can help protect your email and associated accounts from criminals.

Criminals no longer need your passwords to access your financial accounts; they simply need your email. This alarming trend has become a significant concern as account takeover fraud continues to rise.

Recently, a friend of mine, Lisa, experienced this firsthand: her PayPal account was drained, then her Amazon account, followed by an attempted breach of her bank account, all within 40 minutes. The criminals did not require her passwords; they only needed access to her email.

Consider the sensitive information that resides in your email inbox. It contains bank statements, medical results, retirement account details, mortgage information, and access to every streaming service and online store you have ever used. Perhaps most concerning is that every password reset link is sent directly to your inbox.

With access to your email, a criminal can easily reset the passwords for your other accounts. They simply visit your bank’s website, click “forgot password,” and enter your email address. The bank sends a reset link to your inbox, which the criminal can access if they are already inside your email. Within minutes, they can breach your Amazon, PayPal, brokerage, and health insurance accounts.

This type of fraud, known as account takeover fraud, cost Americans an estimated $2.7 billion last year. Disturbingly, 81% of victims reported believing they were “pretty careful” about their security before falling victim to this crime.

To safeguard your email, start by changing your password if it is under 16 characters or if you have reused it across multiple accounts. Consider using a password manager like NordPass, which generates complex passwords that are difficult to guess. You only need to remember one master password to access all your accounts securely.
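The advice above boils down to using long passwords drawn from a large character set. As a minimal sketch of what a password manager does under the hood, the snippet below uses Python's `secrets` module (designed for cryptographic randomness, unlike `random`) to generate a password of at least 16 characters; the `generate_password` name and the 20-character default are illustrative choices, not part of any particular product.

```python
import secrets
import string

# Letters, digits, and punctuation: roughly 94 possible characters per position.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Generate a random password using a cryptographically secure source."""
    if length < 16:
        raise ValueError("use at least 16 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

At 20 characters over a ~94-symbol alphabet, the search space is about 94^20 combinations, which is why length matters more than clever substitutions.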

Implementing two-factor authentication (2FA) is another crucial step. Even if someone steals your password, they cannot access your account without a second verification code. However, many people are unaware that SMS text codes can be intercepted through a method known as a SIM swap attack. In this scenario, a criminal convinces a customer service representative at your cell carrier to transfer your phone number to their device, allowing them to receive your “secure” text codes.

To enhance your security, switch to an authenticator app like Google Authenticator, which generates codes directly on your physical device rather than through your carrier. This change can be made in just a few minutes through your email account’s security settings.
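Authenticator apps avoid the SIM-swap problem because the codes are computed locally from a shared secret and the current time, never transmitted by your carrier. The sketch below shows the standard TOTP algorithm (RFC 6238) in plain Python, assuming a base32-encoded secret like the one a service displays when you enroll; it is an illustration of how such apps work, not the code of any specific app.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, step=30, at=None):
    """Compute an RFC 6238 time-based one-time password.

    The secret is base32-encoded (as shown during 2FA enrollment).
    The code depends only on the secret and the current 30-second window,
    so it can be generated entirely on-device.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the same code from the shared secret and the clock, nothing secret crosses the network at login time, which is exactly what a SIM swap cannot intercept.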

Additionally, be mindful of the permissions you grant to third-party applications. Every time you use the “Sign in with Google” option to access a website or app, you may inadvertently give that app access to your email. Some applications can read your messages or even send emails on your behalf. Conduct an audit of your connected apps by visiting myaccount.google.com, navigating to the Security section, and reviewing third-party apps with account access. Revoke access to any apps you do not recognize or actively use.

While your bank may have a fraud department and your credit card may offer zero-liability protection, your email security is solely your responsibility. Taking these steps can significantly reduce your risk of falling victim to account takeover fraud.

In just twenty minutes, you can implement these three essential security measures. Lisa wishes she had taken these precautions during a quiet Sunday afternoon rather than in a state of panic on a Tuesday night.

Your email inbox can either be a secure fortress or an open door. Unlike your front door, it does not require a deadbolt—just strong security practices.

For more tips on staying safe online, visit Komando.com.

Robot Engages in Real-Time Tennis Matches with Human Players

A humanoid robot has demonstrated the ability to play tennis with a human in real time, utilizing AI technology to track and respond to shots without pre-programmed scripts or remote control.

A humanoid robot has made headlines by rallying tennis shots with a human player in real time. This innovative robot operates without a script or remote control, allowing it to react instantly on the tennis court.

Standing at approximately 4 feet tall, the robot features a compact, human-like frame. Developed by Galbot Robotics, a recent video showcased the robot engaging in a series of shots with a human opponent. The underlying technology, known as LATENT, operates on the Unitree G1 platform.

Unlike many athletic robots that follow pre-programmed routines or rely on remote control, this robot reacts dynamically to its human counterpart. It tracks fast-moving tennis balls, adjusts its position on the court, and returns shots with impressive accuracy. The robot is capable of adapting to changing trajectories and unpredictable shots during rallies, demonstrating significant advancements in robotic performance.

Researchers have noted that the robot can sustain long rallies with millisecond-level reaction times and full-body coordination, marking a major leap forward in robotic capabilities.

Training a robot to play tennis presents a complex challenge. Capturing comprehensive data on human gameplay is difficult, prompting researchers to adopt a different approach. Instead of recording entire matches, they concentrated on smaller segments of movement.

Over the course of their research, the team gathered approximately five hours of motion data from five players. These training sessions took place on a compact 10-by-16-foot court, roughly one-seventeenth the area of a standard tennis court.

The robot’s ability to play tennis during live rallies is rooted in its learning process. Initially, the system learns individual movements, which are then combined into coordinated sequences. This method allows the robot to improve its performance significantly.

To further enhance its capabilities, the research team trained the model in simulated environments, varying physical conditions such as mass, friction, and aerodynamics. This simulation training enables the robot to adapt to real-world unpredictability, allowing it to respond dynamically rather than adhering to a fixed routine.
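Varying mass, friction, and aerodynamics across simulated rollouts is a technique commonly called domain randomization. The sketch below shows the general idea in Python; the parameter names and ranges are illustrative guesses (a regulation tennis ball weighs about 57 g), not values from the Galbot research.

```python
import random

def sample_sim_params(rng):
    """Sample one randomized set of physics parameters for a training rollout.

    Training across many such samples forces the policy to cope with
    conditions it will meet in the real world. Ranges are illustrative.
    """
    return {
        "ball_mass_kg": rng.uniform(0.054, 0.060),   # around the 57 g regulation mass
        "court_friction": rng.uniform(0.5, 0.8),     # surface friction coefficient
        "drag_coefficient": rng.uniform(0.45, 0.65), # aerodynamic drag on the ball
    }

# Each training episode would draw a fresh sample:
params = sample_sim_params(random.Random(0))
```

A policy trained only under one fixed set of parameters tends to overfit the simulator; randomizing them is what lets the robot "respond dynamically rather than adhering to a fixed routine."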

In testing, the system achieved an impressive success rate of up to 96% on forehand shots in simulation. In real-world trials, the robot has demonstrated the ability to sustain rallies with a human player and consistently return the ball over the net.

Observing the demonstration, the robot appears competitive, occasionally placing shots strategically away from the human player. This behavior suggests that the robot is capable of more than mere reaction; it indicates early forms of decision-making abilities.

Despite these advancements, there are still limitations. At times, the robot may appear unstable, and its movements are not yet as fluid as those of a trained athlete. Additionally, high or unpredictable shots can still pose challenges. Nevertheless, the progress made thus far is evident.

This breakthrough in robotics extends beyond the realm of tennis. It illustrates how robots can learn complex human skills without the necessity of perfect data. The methodologies employed in this research could potentially be applied to various tasks that lack complete motion data.

The future of robotic capabilities in sports is becoming increasingly clear. Today, the robot is able to rally; tomorrow, it may compete against human players. In the not-so-distant future, robots could train alongside or challenge professional athletes, and exhibition matches between humans and machines may become a regular feature in the sport.

This demonstration highlights the rapid advancements in robotic technology. Robots are no longer limited to following scripts; they can now react, adjust, and compete in real-time scenarios. What once seemed like a distant possibility is now becoming a reality.

The question remains: If a robot could outperform you on the tennis court, would you still be eager to compete, or would you prefer to train alongside it? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the implications of this technology could reshape not only sports but also various fields that require complex human-like skills.

AI Policy Changes in the U.S. May Impact Indian-American Tech Relations

The Trump administration’s new artificial intelligence framework aims to reshape U.S.-India tech relations by fostering innovation and addressing workforce development in the global AI landscape.

WASHINGTON, DC—The Trump administration has unveiled a national framework on artificial intelligence (AI), a move that could significantly influence Indian talent, IT firms, and policy discussions as the United States seeks to lead the global AI race.

In a six-point plan designed to enhance innovation, safeguard citizens, and reinforce U.S. leadership, the White House expressed its ambition to “win the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people.” The administration has urged Congress to enact this plan into law.

The framework addresses several critical areas, including child safety, economic growth, intellectual property, free speech, innovation, and workforce development. These components are closely intertwined with India’s role in the U.S. technology ecosystem.

“The Administration recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s wellbeing or their monthly electricity bill,” the White House stated. It emphasized that these concerns “require strong Federal leadership to ensure the public’s trust in how AI is developed and used in their daily lives.”

For Indian-origin professionals, the emphasis on cultivating an “AI-ready workforce” is particularly significant. A substantial number of Indians are employed in U.S. technology sectors. The plan advocates for enhanced training and skills development, asserting that workers should “participate in and reap the rewards of AI-driven growth.”

This policy shift is also crucial for India’s IT services sector, which plays a vital role in supporting global AI systems through engineering and data-related work. The administration aims to eliminate “outdated or unnecessary barriers to innovation” and expedite the adoption of AI across various industries, potentially increasing demand for international tech partnerships.

Moreover, the plan places a strong emphasis on data centers and energy management. The White House remarked, “ratepayers should not foot the bill for data centers,” urging Congress to expedite approval processes. It also encourages companies to generate power on-site, as the expansion of AI infrastructure could impact global supply chains connected to India.

On the matter of intellectual property, the administration seeks a balanced approach. It stated that “the creative works and unique identities of American innovators, creators, and publishers must be respected in the age of AI,” while also asserting that AI systems should have the ability to learn from available data.

The framework further underscores the importance of free speech, with the White House asserting that “AI cannot become a vehicle for government to dictate right and wrong-think.” It calls for safeguards to protect lawful expression from censorship.

Another critical aspect of the plan is the establishment of a single national policy. The administration cautioned that “a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.” A uniform regulatory system could benefit Indian firms operating across various U.S. states.

The White House has committed to collaborating with Congress to pass this legislation, emphasizing the necessity for the federal government to establish clear national rules for AI.

As governments worldwide race to regulate AI, the United States and China are at the forefront of this competition. The implications of AI are increasingly linked to economic power and national security.

India is also making strides in expanding its AI ecosystem, investing in technology while maintaining flexible regulations. Decisions made in Washington are likely to set global standards, compelling Indian firms and professionals to adapt to these evolving changes.

According to IANS, the developments in U.S. AI policy will have far-reaching effects on international tech collaborations and workforce dynamics.

SEC Concludes Four-Year Investigation into EV Startup Faraday Future

The SEC has officially closed its four-year investigation into electric vehicle startup Faraday Future, marking a significant moment in the agency’s enforcement history.

The United States Securities and Exchange Commission (SEC) has concluded its investigation into electric vehicle startup Faraday Future, a decision that comes after a lengthy four-year probe. The investigation focused on allegations that the company made “false and misleading statements” following its public debut through a merger with a special purpose acquisition company (SPAC) in 2021.

During the investigation, the SEC scrutinized claims made by Faraday Future regarding the sales of its first electric vehicles, which were reportedly fabricated according to at least three whistleblowers who were former employees of the company. The SEC’s inquiry included multiple subpoenas and depositions of former employees and executives throughout 2024 and 2025.

In July 2025, Faraday Future disclosed that the SEC had issued “Wells Notices” to the company and several of its executives, including founder Jia Yueting. A Wells Notice is a formal communication from the SEC indicating that the agency’s staff has found sufficient grounds to recommend enforcement action.

In light of the SEC’s decision to close the investigation, Yueting expressed relief, stating, “We can now put all our energy into strategy execution. Over the past five years, we had to spend a great deal of time, effort, and money on cooperating with the investigation.” Faraday Future also confirmed that the SEC would not pursue any further action against its executives.

Despite the closure of the investigation, it remains unclear whether Faraday Future responded to the Wells Notices issued last year. As of February, the company indicated in regulatory filings that it had not yet done so, although it planned to engage with the SEC to argue that enforcement action was unwarranted.

Additionally, the U.S. Department of Justice (DOJ) sought information from Faraday Future after the SEC opened its investigation in 2022. The company has characterized this in its regulatory filings as an “investigation,” although the DOJ has never confirmed any ongoing inquiry.

Historically, the SEC tends to pursue enforcement actions after issuing Wells Notices. A study conducted by the Wharton School in 2020 indicated that approximately 85% of targets receiving a Wells Notice ultimately face legal action from the SEC.

In recent years, the SEC has investigated numerous electric vehicle startups that went public via SPAC mergers. While many of these investigations have resulted in settlements, the agency has also dismissed probes into companies like Lucid Motors in 2023 and Fisker in 2025.

As Faraday Future moves forward without the burden of the SEC investigation, the company will likely focus on its strategic goals and the development of its electric vehicle offerings.

According to The American Bazaar, the closure of this investigation marks a pivotal moment for Faraday Future as it seeks to establish itself in the competitive electric vehicle market.

Astronauts Arrive at ISS for Eight-Month Mission After Medical Emergency

Four astronauts have arrived at the International Space Station for an eight-month mission, following a recent medical emergency that led to an early evacuation of some crew members.

Four new astronauts have successfully arrived at the International Space Station (ISS), restoring the facility to full capacity after a recent medical emergency forced an early evacuation of several crew members. The international team, which includes NASA Commander Jessica Meir, launched from Cape Canaveral aboard a SpaceX rocket on Friday, embarking on a journey that lasted approximately 34 hours.

“That was quite the ride,” Meir remarked shortly after the launch, as reported by BBC News. “We have left the Earth, but the Earth has not left us.” The launch had previously been delayed twice due to weather concerns.

Joining Meir for the upcoming eight to nine months on the ISS are NASA astronaut Jack Hathaway, France’s Sophie Adenot, and Russian cosmonaut Andrei Fedyaev. Both Meir and Fedyaev are seasoned space travelers, having previously visited the ISS. Notably, Meir participated in the first all-female spacewalk in 2019. Adenot, a military helicopter pilot, is only the second French woman to travel to space, while Hathaway holds the rank of captain in the U.S. Navy.

The spacecraft is expected to autonomously dock with the space station’s Harmony module at 3:15 p.m. CT on Saturday, traveling at a speed of 17,000 mph in Earth orbit. “What an absolutely wonderful start to the day,” said NASA Administrator Jared Isaacman following the launch. “This mission has shown in many ways what it means to be mission-focused at NASA.”

Isaacman also highlighted the recent adjustments made to the crew schedule, stating, “In the last couple of weeks, we brought Crew-11 home early, we pulled forward Crew-12 to the launch date today, all while simultaneously making preparations for the Artemis 2 mission, which its next window will open up in early March.”

This flight marks the 12th crew rotation with SpaceX as part of NASA’s Commercial Crew Program. Crew-12 will engage in scientific investigations and technology demonstrations aimed at preparing humans for future exploration missions to the Moon and Mars, while also benefiting life on Earth.

NASA confirmed that the capsule’s hatch opened at 4:14 p.m. CT after docking with the ISS. “We are so excited to be here and get to work,” Meir stated upon the crew’s arrival. Adenot shared her awe, saying, “The first time we looked at the Earth was mindblowing. … We saw no lines, no borders.”

Prior to the arrival of the new crew, only one American and two Russians remained aboard the ISS, ensuring the station continued to operate smoothly. The medical evacuation that took place in January was a significant event, marking the first such incident in 65 years. NASA reported that a crew member experienced a serious health issue, but the agency has not disclosed the nature of the condition or the name of the astronaut involved, citing medical privacy.

The astronaut who faced the medical emergency, along with three other crew members who had launched together, returned to Earth more than a month earlier than planned after the decision was made to bring them home.

According to The Associated Press, the successful arrival of the new crew marks a significant step forward for the ISS and its ongoing scientific missions.

Fake Google Security Page Can Compromise Your Browser’s Privacy

A new phishing scam impersonating Google is tricking users into installing malware that can steal sensitive information and spy on their devices.

Security researchers have uncovered a phishing scam that masquerades as a Google security check, tricking individuals into installing malware designed to steal two-factor authentication (2FA) codes, track locations, and monitor clipboard data.

The fraudulent page presents itself as a legitimate Google security alert, claiming that users need to enhance their account protection. It guides visitors through a seemingly straightforward setup process aimed at bolstering their security and safeguarding their devices. However, those who follow the instructions may unwittingly install what appears to be a harmless security tool, which, in reality, is a malicious web application capable of spying on their devices.

According to security experts, this malicious app can capture login verification codes, monitor clipboard activity, track GPS location, and reroute internet traffic through the user’s browser. The most alarming aspect of this scam is that it does not exploit any software vulnerabilities; instead, it relies on social engineering to trick users into granting the necessary permissions. Once these permissions are granted, the user’s own browser can be manipulated to serve the attackers’ purposes without their knowledge.

Researchers at Malwarebytes, a cybersecurity firm, recently identified a phishing website that imitates Google’s account protection system. This site, operating under the domain google-prism[.]com, presents a convincing security page that prompts users to complete a brief verification process. Visitors are instructed to undertake a four-step setup to enhance their account security, which purportedly protects their devices from various threats.

During this process, users are asked to approve multiple permissions and install what is claimed to be a security tool. The application installed is actually a Progressive Web App (PWA), which runs through the browser but functions like a native application on a computer. It can open in its own window, send notifications, and perform tasks in the background.

Once installed, the malicious web app can gather contacts, read clipboard information, track GPS location data, and attempt to capture one-time login codes sent to users’ phones. These codes are commonly used for accounts that implement two-factor authentication.

Additionally, the fake security page may offer an Android companion app described as a “critical security update.” Researchers have noted that this app requests an alarming 33 permissions, including access to text messages, call logs, contacts, microphone recordings, and accessibility features. Such extensive permissions enable attackers to read messages, capture keystrokes, monitor notifications, and maintain control over various aspects of the device. Even if the Android app is not installed, the web app alone can still collect sensitive information and operate quietly through the user’s browser.

The effectiveness of this scam lies in its ability to mimic trusted sources. Many individuals expect security alerts from the services they utilize, particularly regarding the protection of their email or cloud accounts. Attackers exploit this trust by presenting the fake page as a beneficial security feature. When users approve the permissions and install the web app, they inadvertently grant attackers access to specific areas of their devices. One of the primary targets for these attackers is the capture of one-time passwords, which are essential for logging into accounts that require two-factor authentication.

If attackers successfully capture these codes while also knowing the user’s password, they may gain access to various accounts, including email, financial services, or cryptocurrency wallets. The malware’s capability to monitor clipboard activity is particularly concerning, as individuals often copy cryptocurrency wallet addresses before conducting transactions, making this information valuable to criminals.

Another feature of the malicious app allows attackers to route internet requests through the user’s browser, making it appear as though online activity originates from the user’s home network. The app can also send notifications that mimic security alerts or system warnings. When users click on these notifications, the app reopens, providing another opportunity to capture sensitive information such as login codes or clipboard data.

In response to inquiries about this phishing campaign, a Google spokesperson confirmed that several built-in security systems are in place to thwart threats like this before they can inflict harm. “We can confirm that Safe Browsing in Chrome warns any user who tries to visit this site,” the spokesperson stated. “Chrome also shows a confirmation dialog whenever anyone attempts to download an APK. Android users are automatically protected against known versions of this malware by Google Play Protect, which is enabled by default on Android devices with Google Play Services.”

Google also indicated that its current monitoring shows no apps containing this malware are available on the Google Play Store. Even if malicious apps are installed from outside official stores, Google asserts that Android devices have an additional layer of protection. Google Play Protect can alert users or block apps known to exhibit malicious behavior, including those installed from third-party sources.

However, it is crucial to recognize that Google Play Protect may not be foolproof. Historically, it has not always been 100% effective in removing all known malware from Android devices. Therefore, experts recommend using robust antivirus software to detect malicious downloads, suspicious browser activity, and phishing attempts before they can cause significant damage. Such software acts as an early warning system, helping to block dangerous apps and websites before they can access your device or data.

To avoid falling victim to a suspicious “security check,” users should adopt a few simple habits to protect their accounts and devices. Google does not request the installation of security tools through pop-ups or unfamiliar websites. If a page claims that an account requires a security check, users should close the tab and navigate directly to Google’s official account page by typing the address manually. This approach prevents attackers from redirecting users to a fraudulent site.

Phishing pages often utilize domains that closely resemble those of legitimate companies. Attackers rely on users clicking quickly without scrutinizing the address bar. If the website address does not belong to an official Google domain, it should not be trusted. Even minor alterations in spelling can indicate a fake site designed to steal information.
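The core of the check described above is comparing the browser's actual hostname against the legitimate domain, where only an exact match or a true subdomain counts. A minimal sketch in Python (the `is_official_google` function is a hypothetical helper for illustration, and the suffix rule is the key detail: a dot must precede the official domain, so lookalikes like the campaign's `google-prism.com` fail):

```python
from urllib.parse import urlparse

def is_official_google(url):
    """Return True only if the URL's host is google.com or a subdomain of it.

    Note the leading dot in the suffix check: "google-prism.com" and
    "evilgoogle.com" both fail, while "myaccount.google.com" passes.
    """
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return host == "google.com" or host.endswith(".google.com")
```

This is the same logic your eyes should apply to the address bar: the official name must sit at the very end of the hostname, separated by a dot, not merely appear somewhere inside it.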

If users have installed an app through a website and it opens like a standalone program, they should check their browser’s installed apps or extensions list. Removing any unfamiliar or unrecognized items immediately can prevent further information collection or command execution through the browser.

Researchers warn that the malicious Android app may appear under names such as “Security Check” or “System Service.” If users encounter unfamiliar apps with these names, they should review the permissions requested and remove them if they seem suspicious. Apps requesting extensive permissions, such as SMS access, accessibility features, and microphone control, should always be scrutinized.

Using a password manager can help create and store strong, unique passwords for every online account. If attackers obtain one password, they will not automatically gain access to other accounts. Password managers also help prevent users from entering credentials on fake sites, as they typically refuse to auto-fill on lookalike domains.

Two-factor authentication (2FA) adds an extra layer of security beyond passwords. Although this attack aims to capture SMS verification codes, many services allow the use of authenticator apps instead. These apps generate login codes directly on the user’s device, making it significantly more challenging for attackers to intercept them.

If users suspect they have interacted with a dubious security page, they should closely monitor their accounts in the following days for login alerts, password reset emails, or unfamiliar transactions. Prompt action in response to suspicious activity can help prevent attackers from gaining full control over accounts.

Scammers often gather personal information from data broker sites to craft convincing phishing messages. Utilizing a data removal service can assist in removing personal information from these databases, thereby reducing the amount of data criminals can exploit to impersonate companies or create targeted scams.

As attackers evolve their tactics, they are increasingly relying on convincing security messages to persuade individuals to install malicious tools themselves, rather than exploiting technical flaws. Given the reliance on familiar brands like Google for security decisions, it is essential to enhance safeguards against impersonation sites and improve the regulations surrounding the capabilities of installed web apps.

For more information on cybersecurity and to stay updated on potential threats, visit CyberGuy.com.

Hospital Cyberattacks Raise Concerns Over Patient Safety and Care

Hospital cyberattacks pose significant risks to patient safety, disrupting care and exposing sensitive medical data, as highlighted by security expert Ricardo Amper.

Recent episodes of medical dramas may dramatize the chaos of a hospital cyberattack, but for many healthcare facilities, these scenarios are all too real. In Mississippi, the University of Mississippi Medical Center experienced a ransomware attack that forced clinics statewide to close, canceled elective procedures, and disrupted access to electronic medical records. While emergency care continued, the incident underscored a growing concern: hospital cyberattacks are not merely a technical issue but a serious public safety threat.

According to Ricardo Amper, founder and CEO of Incode Technologies, a digital identity verification and biometric authentication company, hospitals are uniquely vulnerable to cyber threats. “If systems go down, patient care is immediately affected,” he explained. The urgency to restore operations quickly often makes healthcare facilities prime targets for ransomware groups. Amper notes that hospitals house some of the most sensitive data, including medical records, identity information, and insurance details, making them attractive targets for cybercriminals.

Moreover, the interconnected nature of healthcare systems means that vulnerabilities can arise from third-party vendors and service providers. “In healthcare, you’re only as secure as the entire ecosystem around you,” Amper stated. While many people envision hackers breaching firewalls, the reality is shifting. Increasingly, attackers are employing social engineering tactics to exploit human trust rather than technical weaknesses.

Artificial intelligence (AI) has made it easier for criminals to impersonate trusted individuals. They can clone voices, generate convincing emails, or create deepfake videos that appear to come from legitimate sources, such as doctors or IT administrators. “AI doesn’t replace social engineering; it supercharges it,” Amper remarked. This means that an employee might receive what seems to be a legitimate request to reset a password or approve a login, leading to a potential breach with just one click.

In the fast-paced environment of a hospital, speed is essential. Healthcare professionals are often focused on patient care, which can create openings for attackers who rely on deception. “That urgency can make it easier for attackers to exploit trust or distraction,” Amper noted. Additionally, many hospitals operate with legacy systems that have been layered over time, increasing complexity and risk. Amper challenges the notion that cybersecurity is solely an IT issue, emphasizing that it is fundamentally about operational resilience.

When a hospital’s systems are compromised, the fallout can be extensive. Exposed data may include not only credit card numbers but also medical histories, Social Security numbers, insurance information, and contact details. This combination can lead to identity fraud, insurance fraud, and targeted scams. Unlike credit cards, stolen medical identities cannot simply be replaced, making them particularly valuable in criminal markets. The effects of a breach may not be immediate; they can emerge months or even years later.

As identity theft becomes increasingly prevalent, Amper highlights the importance of robust identity verification measures. “Identity has become the front line of cybersecurity,” he stated. If an attacker can successfully impersonate a trusted user, many traditional defenses can be bypassed. Hospitals must implement stronger identity verification, layered authentication, and systems capable of detecting impersonation or deepfakes to safeguard against these threats.

For patients concerned about the security of their data following a breach, there are steps they can take. One proactive measure is to check if their email address appears in known data breaches by visiting haveibeenpwned.com. If an email is found in a breach, it is crucial to act quickly by changing passwords for affected accounts and ensuring that each account uses a unique password.

Receiving a breach notification letter can be alarming, but Amper advises patients to remain calm and take it seriously. “Read the notice carefully and enroll in any credit or identity monitoring services offered,” he suggests. If something feels off, patients should contact the hospital directly using official contact information rather than relying on links or numbers provided in unexpected messages. He emphasizes the importance of treating medical identity with the same seriousness as financial identity, urging individuals to monitor their records and remain vigilant.

The consequences of hospital cyberattacks extend beyond stolen records; they affect entire communities. Appointments are canceled, surgeries are delayed, and families are left in uncertainty. This situation raises an uncomfortable question: if your local hospital were to go offline tomorrow, would you trust that your medical identity and care are adequately protected?

As technology continues to transform healthcare, the challenge lies in building resilience into every layer of care. The next cyberattack will not feel like a scripted drama; it will have real-world implications for patient safety and trust in the healthcare system. Taking proactive measures today can help prevent long-term identity damage in the future.

For more insights on cybersecurity and protecting personal information, visit CyberGuy.com.

Wall-Climbing Robots Assist US Navy Warships, Fox News Reports

Wall-climbing robots are now crawling on U.S. Navy warships, marking a significant advancement in naval technology amid rising tensions with China.

The Fox News AI Newsletter provides insights into the latest advancements in artificial intelligence technology, highlighting both the challenges and opportunities that AI presents in various sectors.

In a recent report, Fox News Digital showcased a groundbreaking development in naval technology: wall-climbing robot swarms that are now being deployed on U.S. Navy warships. This innovation comes at a pivotal moment as the U.S. faces an expanding naval fleet from China, which is rapidly increasing in size and capability.

In addition to military advancements, the economic implications of artificial intelligence are also a topic of discussion. An opinion piece in the newsletter argues that the costs associated with AI development, including its extensive energy requirements, will ultimately be passed down to consumers. This raises concerns about who will bear the financial burden of the growing AI industry.

On the corporate front, Dell Technologies has reduced its workforce by roughly 10%, marking the third consecutive year of headcount declines. This trend reflects ongoing shifts in economic conditions and corporate restructuring within the technology sector, as reported by Fox Business.

In the realm of aviation, Merlin CEO Matt George discussed advancements in AI pilot technology, which aims to enable military and commercial aircraft to operate fully autonomously. This development was highlighted during an appearance on Fox Business’ ‘The Claman Countdown.’

The impact of AI extends beyond military and corporate applications. Homebuyers and sellers are increasingly turning to AI chatbots for guidance in real estate transactions. Experts Lou Basenese and Kirsten Jordan shared insights on this trend during a segment on ‘Fox Business In Depth.’

Fox Business host Charles Payne also addressed the broader economic implications of AI, emphasizing that disruption is already occurring across various industries. His commentary on ‘Making Money’ reflects the growing recognition of AI’s transformative potential.

Entrepreneurship is another area where AI is making waves. Angie Hicks, co-founder of Angi, discussed her journey in building a home services giant and the role AI plays in her business strategy during an interview on ‘Mornings with Maria.’

For those interested in staying updated on the latest developments in AI technology, the Fox News AI Newsletter offers a comprehensive overview of the challenges and opportunities that lie ahead.

According to Fox News, these advancements in AI and technology are reshaping industries and influencing economic dynamics in significant ways.

Purdue Researcher Develops 3D Detection System for Self-Driving Vehicles

Purdue University’s Somali Chaterji has developed AGILE3D, a groundbreaking 3D detection system that enhances real-time perception for self-driving vehicles and other autonomous technologies.

A team at Purdue University, led by Indian American researcher Somali Chaterji, has unveiled a revolutionary 3D detection system that could significantly impact the manufacturing of autonomous vehicles, industrial robotics, delivery robots, and drones. This innovative system, known as AGILE3D, is currently patent-pending and is designed to outperform traditional 3D lidar perception pipelines, particularly during resource contention.

“AGILE3D is the first adaptive, contention- and content-aware 3D object detection system specifically tailored for embedded GPUs, or graphics processing units,” explained Chaterji, who serves as an associate professor of agricultural and biological engineering in Purdue’s College of Agriculture and College of Engineering. She also holds a courtesy appointment in the Elmore Family School of Electrical and Computer Engineering.

The AGILE3D system can dynamically adjust its detection strategies based on real-time hardware constraints and varying input data. This adaptability is crucial for applications that require rapid 3D perception while operating within the limited computational resources of onboard systems.

Research findings presented at prestigious conferences, including the Conference on Neural Information Processing Systems (NeurIPS), the European Conference on Computer Systems (EuroSys), and the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), indicate that AGILE3D meets stringent latency objectives. It delivers an accuracy improvement of over 3% compared to adaptive controllers and up to 7% over commonly used static 3D detectors.

Chaterji emphasized the broad applicability of AGILE3D, stating that it is particularly well-suited for autonomous driving, where real-time processing of lidar frames is essential for safety. “Beyond cars, AGILE3D can enhance the performance of delivery robots, drones, industrial and mobile robotics, as well as augmented reality and virtual reality applications,” she noted. “This is especially important in fields like digital agriculture and forestry, where platforms rely on embedded GPUs and require predictable latency for smoother and safer operations.”

As multiple onboard workloads—such as perception, tracking, planning, and in-cabin infotainment—compete for GPU resources, maintaining performance becomes increasingly challenging. Chaterji explained that resource contention arises when these various processes share the same embedded GPU and memory system simultaneously. An example of this is a ride-hailing robotaxi, where camera perception, lidar processing, tracking, mapping, and planning must all function concurrently.

One of the primary limitations of 3D lidar technology is its update rate, which dictates how frequently the sensor can provide a new point cloud frame, essentially a fresh 3D snapshot of the surrounding environment. AGILE3D addresses this challenge by employing two coordinated layers: a multibranch execution framework (MEF) and a contention- and content-aware reinforcement learning (CARL) controller. These components work together to maintain high accuracy even under varying levels of hardware contention and latency budgets ranging from 100 to 500 milliseconds.
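The adaptive idea behind AGILE3D can be sketched at a very high level: given several detector branches that trade latency for accuracy, a controller picks the most accurate branch whose expected latency, inflated by the current GPU contention, still fits the latency budget. The sketch below is purely illustrative; the branch names, latency/accuracy numbers, and contention model are hypothetical and not taken from the published system.

```python
# Illustrative sketch (not AGILE3D's actual code): selecting a detector
# branch whose expected latency fits the current budget under contention.
# Branch profiles and the contention factor are hypothetical numbers.

BRANCHES = [
    # (name, base_latency_ms, accuracy)
    ("dense-voxel", 420, 0.78),
    ("mid-voxel", 240, 0.74),
    ("pillar-lite", 110, 0.69),
]

def pick_branch(latency_budget_ms, contention_factor):
    """Return the most accurate branch whose latency, scaled by the
    measured contention factor, still meets the latency budget."""
    feasible = [
        (name, acc) for name, lat, acc in BRANCHES
        if lat * contention_factor <= latency_budget_ms
    ]
    if not feasible:
        # Fall back to the fastest branch when nothing fits the budget.
        name, _, _ = min(BRANCHES, key=lambda b: b[1])
        return name
    return max(feasible, key=lambda f: f[1])[0]

# A relaxed 500 ms budget with no contention admits the densest branch.
print(pick_branch(500, 1.0))   # dense-voxel
# Heavy contention (2x slowdown) under a 300 ms budget forces a lighter one.
print(pick_branch(300, 2.0))   # pillar-lite
```

In the real system this selection is learned by the reinforcement-learning controller rather than computed by a fixed rule, but the sketch shows why contention awareness matters: the same budget admits different branches depending on how loaded the GPU is.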

Chaterji and her team are continuing to develop AGILE3D to facilitate dense scene understanding on onboard computers, ensuring that 3D semantic segmentation can operate reliably within tight compute and memory constraints. Funding for this project has been provided through Chaterji’s National Science Foundation CAREER grant, as well as a separate NSF grant for their CHORUS center.

Chaterji holds a PhD in Biomedical Engineering from Purdue University, where she has received several accolades, including the Chorafas International Award and the College of Engineering Best Dissertation Award in 2010. She completed her post-doctoral fellowship at the University of Texas at Austin in the Department of Biomedical Engineering and has been a scientific advisor to the IC2 Institute at the University of Texas at Austin since 2014. In 2016, she was honored with Purdue’s Seed-for-Success Award for securing a research grant exceeding $1 million.

The development of AGILE3D marks a significant advancement in the field of autonomous technology, promising to enhance the safety and efficiency of various applications reliant on real-time 3D perception.

According to a media release from Purdue University, the AGILE3D system represents a pivotal step forward in the integration of advanced perception capabilities into autonomous systems.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms.

This week, NASA announced the finalization of its strategy aimed at sustaining a human presence in space, particularly in light of the planned de-orbiting of the International Space Station (ISS) in 2030. The new strategy emphasizes the necessity of maintaining the capability for extended stays in orbit after the ISS is retired.

The document, titled “NASA’s Low Earth Orbit Microgravity Strategy,” outlines the agency’s vision for the next generation of continuous human presence in orbit. It aims to facilitate greater economic growth and uphold international partnerships. However, the strategy comes amid uncertainties regarding the readiness of upcoming commercial space stations.

NASA Deputy Administrator Pam Melroy acknowledged the challenges posed by budget constraints, stating, “Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities.”

Commercial space company Voyager is among those working on potential replacements for the ISS. Jeffrey Manber, Voyager’s president of international and space stations, expressed support for NASA’s strategy, emphasizing the need for a commitment to reassure investors. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” he noted.

The initiative to maintain a permanent human presence in space dates back to President Reagan, who highlighted the importance of private partnerships in his 1984 State of the Union address. “America has always been greatest when we dared to be great. We can reach for greatness,” he stated, while also warning that the market for space transportation could exceed the nation’s capacity to develop it.

Since the launch of the first piece of the ISS in 1998, the station has hosted more than 280 individuals from 23 countries, maintaining continuous human occupation for 24 years. The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and emphasized the transition to commercial platforms, a policy that has been upheld by the Biden administration.

NASA Administrator Bill Nelson addressed the potential for extending the ISS’s operational life, stating, “Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031.”

Recent discussions have raised questions about the meaning of “continuous human presence.” Melroy remarked at the International Astronautical Congress in October that there is still ongoing dialogue about whether this presence constitutes a “continuous heartbeat” or merely a “continuous capability.” She emphasized the importance of understanding this concept, especially in light of concerns from commercial and international partners regarding the potential loss of the ISS without a commercial station ready to take its place.

“Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy stated. She further underscored the United States’ leadership in human spaceflight, noting that the only other space station in orbit when the ISS de-orbits will be the Chinese space station. “We want to stay and remain the partner of choice for our industry and for our goals for NASA,” she added.

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

Melroy acknowledged the challenges posed by budget caps resulting from agreements between the White House and Congress for fiscal years 2024 and 2025, which have limited investment opportunities. “What we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit,” she said.

Voyager remains optimistic about its development timeline, with plans to launch its Starlab space station in 2028. Manber stated, “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station.” He emphasized the importance of maintaining a permanent presence in space, warning that losing it would disrupt the supply chain that supports the burgeoning space economy.

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could prove crucial for some projects. NASA may also consider funding new space station proposals, such as one from Vast Space of Long Beach, California, which recently unveiled concepts for its Haven modules and plans to launch Haven-1 as soon as next year.

Melroy concluded by stressing the importance of competition in the development of commercial space stations. “This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” she said.

According to Fox News, NASA’s finalized strategy reflects a commitment to maintaining a human presence in space, while navigating the complexities of budget constraints and commercial partnerships.

X Service Outage Affects Thousands of Users Across the U.S.

Social media platform X experienced a significant outage on March 18, impacting thousands of users across the U.S. before service was restored later in the day.

On March 18, the social media platform X faced a considerable outage that affected thousands of users throughout the United States. According to data from the outage-tracking website Downdetector, the service was restored later in the day.

The disruption was most noticeable during the morning hours, with approximately 34,500 users reporting issues before the situation improved. By 11:39 a.m. Eastern Time, the number of outage reports had decreased to 849 on Downdetector. This website aggregates incident reports from various sources, including user submissions, suggesting that the actual number of affected users may be higher than the reported figures.

Users encountered difficulties accessing essential features of the platform, such as loading posts, refreshing feeds, and receiving notifications. The outage impacted both the mobile and web versions of X, disrupting real-time communication for many users around the globe.

The cause of the outage remains unclear. X, which is owned by Elon Musk, did not respond to requests for comment.

This recent incident underscores the platform’s vulnerability to technical disruptions. X serves as a significant channel for news dissemination, public discourse, and business communication. The rapid increase in outage reports indicates that the problem escalated quickly, following a period of normal activity on the platform.

Such disruptions are not uncommon. X has experienced multiple outages in recent months, affecting users not only in the United States but also in other regions worldwide. These recurring issues raise concerns about the stability of the platform’s infrastructure and its capacity to manage large-scale user demand without interruptions.

Ultimately, this outage serves as a reminder of the crucial role digital platforms play in modern communication and the inherent risks associated with their occasional instability.

According to Downdetector, the service was restored later in the day, but the incident highlights ongoing concerns about the reliability of social media platforms.

Robot Firefighters Deployed to Enter Burning Buildings First

New robotic firefighting vehicles equipped with thermal cameras and water cannons are transforming emergency response by entering burning buildings before human firefighters.

Firefighters often confront significant challenges when responding to major blazes, primarily due to the uncertainty of what lies within a burning structure. Smoke obscures visibility, floors may be unstable, and toxic gases can accumulate rapidly. Even seasoned crews can find themselves entering buildings with limited information about the hazards they may face.

However, a new generation of robotic firefighting vehicles is poised to change this dynamic. These rugged robots can enter dangerous environments first, scanning the scene to locate fires and assess hazards before human firefighters step inside. By providing real-time information, these machines enable crews to make informed decisions, enhancing safety and effectiveness during firefighting operations.

The robotic firefighter is specifically designed for conditions where heat, smoke, and collapsing structures pose significant risks to human responders. Equipped with a powerful water cannon, the vehicle can adjust its output to deliver either a focused stream or a wide spray, depending on the situation. Additionally, thermal cameras allow the robot to see through thick smoke, providing critical visibility in chaotic environments.

One of the standout features of this robotic vehicle is its self-cooling system. The robot can spray a protective curtain of water around itself, preventing overheating even in extreme temperatures that can reach nearly 1,500 degrees Fahrenheit. In such conditions, human firefighters would be unable to operate safely.

Fire scenes are often unpredictable, with debris blocking pathways and visibility rapidly diminishing. To navigate these challenges, the robot is equipped with six independently powered wheels, each with its own motor. This design allows the vehicle to rotate in place and maneuver through tight spaces effectively. It can also climb steep ramps, such as those found in parking garages, and roll over obstacles up to a foot tall. An advanced driving system scans the terrain, guiding the robot around hazards while streaming live video back to firefighters outside the building.

This real-time video feed is invaluable, as it allows crews to see where flames are spreading and where potential survivors may be trapped. Such insights help firefighters formulate a strategic plan before entering the building, significantly enhancing their safety and effectiveness.

Another practical feature of the robotic firefighter addresses a common challenge faced by firefighters during rescues. The robot carries a hose that glows in dark, smoky environments, providing a visible path for rescuers. This glowing hose can be a lifesaver, helping firefighters navigate back to safety when visibility is nearly nonexistent.

The emergence of firefighting robots is part of a broader trend in emergency response, where machines are increasingly taking on tasks that place human lives at risk. Similar technologies are already in use across various fields, including autonomous mining trucks in remote locations and robots that clear landmines in former war zones. The underlying principle is straightforward: allow machines to handle the most dangerous initial moments of a crisis while human responders focus on rescue and strategy.

Engineers are also exploring the potential of artificial intelligence to enhance these robotic systems further. Future iterations may analyze fire size, smoke patterns, and heat levels to assist in firefighting decisions, making these robots even more effective in crisis situations.

The robotic firefighter was developed by Hyundai Motor Group in collaboration with South Korea’s National Fire Agency. Recently, the company donated several of these vehicles to fire stations in South Korea, allowing crews to begin utilizing them in real emergencies. Two robots have already been delivered, with additional units expected soon.

The technology has already undergone its first real-world test during a factory fire in North Chungcheong Province. The push for safer firefighting tools is underscored by alarming statistics; according to the Korea National Fire Agency, 1,788 firefighters have been injured or killed on the job over the past decade. By enabling robots to enter hazardous environments first, the hope is to reduce these numbers significantly.

While most people may not yet see these machines in their neighborhoods, the rapid adoption of firefighting technology suggests that their presence could become more common as departments recognize the benefits. U.S. fire agencies are already employing drones, thermal cameras, and robotics in various rescue scenarios. A robot that can scout a burning building before firefighters enter could soon become an essential tool in their arsenal, providing better information and reducing the risks associated with blind entries into dangerous structures.

For firefighters, this technology offers a critical advantage: enhanced situational awareness when every second counts. Although robots will never replace the human element in firefighting, they can provide invaluable support, ensuring that responders have the best possible information before they commit to entering a burning building.

As the technology continues to evolve, it raises an important question for communities: If your local fire department had access to a robot capable of entering a burning building first, would you support its use? This innovative approach to firefighting could lead to faster rescues and safer emergency responses in the future, ultimately benefiting everyone.

According to Fox News, the integration of robotic technology in firefighting represents a significant advancement in emergency response capabilities.

Orbiter Photos Reveal Lunar Modules from First Two Moon Landings

Recent aerial images from India’s Chandrayaan 2 orbiter reveal the Apollo 11 and Apollo 12 lunar landing modules more than 50 years after their historic missions.

Photos captured by the Indian Space Research Organisation’s (ISRO) moon orbiter, Chandrayaan 2, provide a stunning view of the Apollo 11 and Apollo 12 landing sites over half a century after these historic missions. The images, taken in April 2021, were recently shared on the Curiosity page on X, a platform dedicated to space exploration.

“Image of Apollo 11 and 12 taken by India’s Moon orbiter. Disproving Moon landing deniers,” Curiosity posted, accompanied by the overhead photographs that clearly depict the lunar landing vehicles resting on the moon’s surface.

Apollo 11, which made its historic landing on July 20, 1969, marked a monumental achievement in human space exploration, with astronauts Neil Armstrong and Buzz Aldrin becoming the first men to walk on the lunar surface. Their fellow astronaut, Michael Collins, remained in lunar orbit during their historic excursion.

The ascent stage of the lunar module, known as Eagle, was jettisoned into lunar orbit after it successfully rendezvoused with the command module, where Collins was stationed. Its ultimate fate is unknown, though it is presumed to have eventually impacted the lunar surface.

Following Apollo 11, Apollo 12 became NASA’s second crewed mission to land on the moon, occurring on November 19, 1969. During this mission, astronauts Charles “Pete” Conrad and Alan Bean made history as the third and fourth men to walk on the lunar surface.

The Apollo program continued until December 1972, culminating in the final mission when astronaut Eugene Cernan became the last person to walk on the moon.

The Chandrayaan-2 mission was launched on July 22, 2019, just over 50 years after the Apollo 11 mission, and the orbiter captured these remarkable images of the 1969 lunar landers roughly two years later.

In addition to Chandrayaan-2, India launched Chandrayaan-3 in 2023, which successfully landed near the moon’s south pole, marking another significant achievement in lunar exploration.

These recent images serve not only as a testament to the enduring legacy of the Apollo missions but also highlight the ongoing advancements in space exploration technology, as nations around the world continue to explore the mysteries of the moon and beyond.

According to Fox News, the images from Chandrayaan 2 reaffirm the historical significance of the Apollo landings and contribute to the ongoing dialogue about space exploration and its impact on humanity.

Indian-American Researchers Develop Tool to Prevent Identity Leaks in AI Photo Editing

Three Indian American researchers from Purdue University have developed a groundbreaking system to safeguard personal identities during AI photo editing by limiting the detection of key attributes.

Three Indian American researchers at Purdue University have created a patent-pending system designed to protect against identity leakage during AI photo editing. This innovative tool reduces the ability of artificial intelligence to detect sensitive attributes such as eye color and facial hair.

The system, developed by Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty, is utilized before and after photos are uploaded to an AI editing platform. According to a media release from the West Lafayette, Indiana-based public research university, this technology aims to assist consumers, businesses, and institutions in editing and sharing profile photos, ID images, and personal pictures without compromising their private identities.

“Results of validation testing show that we can preserve editing quality while dramatically reducing what AI models can learn about your identity,” Aggarwal stated. “This is a critical step toward trustworthy generative AI.” Their research has been published in the peer-reviewed journal IEEE Transactions on Artificial Intelligence.

Aggarwal holds the title of University Faculty Scholar and serves as the Reilly Professor of Industrial Engineering, with additional appointments in the Department of Computer Science and the Elmore Family School of Electrical and Computer Engineering. Both Tamboli, a doctoral alumnus, and Punyamoorty, a doctoral candidate in computer and electrical engineering, have worked in Aggarwal’s research group.

“Our system allows users to mask sensitive regions on their photo, like the face, from an AI editing service,” Tamboli explained. “Those regions are masked locally on the user’s device using a detailed outline of the region.” He added that only the masked image is sent to the AI editing service. “After the image is edited by AI, our system reintegrates the sensitive region back into the edited image using geometric alignment and blending,” he noted.
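The mask-then-reintegrate workflow Tamboli describes can be sketched in a few lines. This is a hypothetical toy, not the Purdue implementation: it uses a plain 2D grid in place of a real image and a rectangular mask in place of the detailed region outline, but it captures the key property that the sensitive pixels never leave the device and are pasted back after the remote edit.

```python
# Toy sketch of local masking and post-edit reintegration (hypothetical,
# not the Purdue system). Only the masked image is "sent" for editing;
# the sensitive patch stays local and is restored afterward.

def mask_region(image, top, left, h, w, fill=0):
    """Return (masked_image, saved_patch): blank out a rectangle while
    keeping the original pixels locally."""
    saved = [row[left:left + w] for row in image[top:top + h]]
    masked = [row[:] for row in image]
    for r in range(top, top + h):
        for c in range(left, left + w):
            masked[r][c] = fill
    return masked, saved

def reintegrate(edited, saved, top, left):
    """Paste the locally kept patch back into the edited image."""
    out = [row[:] for row in edited]
    for i, row in enumerate(saved):
        out[top + i][left:left + len(row)] = row
    return out

# 4x4 "image"; mask the 2x2 block at (1, 1).
img = [[r * 4 + c for c in range(4)] for r in range(4)]
masked, patch = mask_region(img, 1, 1, 2, 2)
# Pretend a remote editor brightened every pixel it could see by 100.
edited = [[p + 100 for p in row] for row in masked]
final = reintegrate(edited, patch, 1, 1)
assert final[1][1] == img[1][1]  # sensitive pixels restored untouched
```

The real system replaces the crude rectangle-paste with geometric alignment and blending so the boundary between the protected region and the AI-edited surroundings looks seamless.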

Aggarwal emphasized that the Purdue system is the first solution to provide full privacy, as sensitive data never leaves the user’s device. This approach not only produces seamless, natural results in the final edited image but is also compatible with any commercial generative AI model, eliminating the need for retraining.

“It’s privacy by design,” Aggarwal said. “With our system, the AI platform never sees the face, but the final edited image still looks completely natural.” The researchers have disclosed their system to the Purdue Innovates Office of Technology Commercialization, which has applied for a patent to protect the intellectual property.

Addressing the privacy risks associated with AI editing tools, Tamboli noted that modern generative AI technologies edit photos with impressive realism but require users to upload full, unaltered images to cloud-based systems. These images often contain private details, including facial features and identifying characteristics.

“Requiring full, unaltered images creates serious privacy and security risks,” he said. “Once a photo is uploaded, users lose control over where their biometric data goes, how it is stored, or how it might be misused.” Tamboli criticized previous privacy approaches that relied on blurring sensitive regions, locking parts of an image, using stylization filters, or avoiding cloud uploads entirely, stating that these methods fail to fully protect personal identity.

The research team validated their system by testing how well leading AI foundation models could infer biometric attributes from masked versus unmasked images. They discovered that the Purdue system significantly reduced the ability of AI models to detect attributes such as eye color, facial hair, and age group. In some instances, the accuracy of attribute classification dropped by more than 80%, demonstrating robust protection against identity leakage.

The researchers are actively working to bring this technology closer to real-world deployment, with plans to expand the system’s capabilities to protect additional sensitive features, including medical details, ID documents, and other privacy-critical content.

This innovative development highlights the ongoing efforts of researchers to address privacy concerns in the rapidly evolving landscape of AI technology, ensuring that personal identities remain secure in the digital age.

According to The American Bazaar, the Purdue Innovates Office of Technology Commercialization is committed to advancing this technology for broader application.

The Email Technique That Uncovers Hidden Online Accounts

Searching your email inbox for old sign-up messages can help you uncover forgotten online accounts and reduce your digital footprint.

In today’s digital landscape, many individuals find themselves with a multitude of online accounts, often far more than they can remember. From shopping sites and travel apps to rewards programs and forums, the ease of signing up for services can lead to a cluttered digital existence.

These forgotten accounts can pose risks, as they contribute to a larger digital footprint and may expose personal information if a company experiences a data breach. Fortunately, there is a straightforward method to uncover these accounts using a tool that most people already have at their disposal: their email inbox.

When you create an account on a website, it typically sends a confirmation email. This means your inbox serves as a timeline of every service you have joined. Instead of racking your brain to remember all the sites you signed up for, you can simply search your email for clues.

To begin, open your email account and utilize the search bar. Enter phrases commonly found in sign-up emails, such as “welcome,” “confirm your account,” or “thank you for registering.” These keywords often yield a treasure trove of account confirmations, revealing services you may have forgotten about.
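The triage described above can be sketched in a few lines of code. The short Python example below (all sender addresses and subject lines are invented) scans a list of message headers for common sign-up phrases and collects the sending domains:

```python
# Scan (sender, subject) pairs for phrases that typically appear in
# sign-up confirmation emails, and collect the sending domains.
SIGNUP_PHRASES = ("welcome", "confirm your account", "thank you for registering")

def find_signup_senders(messages):
    """Return the set of sender domains whose subjects look like sign-up mail."""
    domains = set()
    for sender, subject in messages:
        if any(phrase in subject.lower() for phrase in SIGNUP_PHRASES):
            domains.add(sender.split("@")[-1])
    return domains

# Hypothetical inbox sample:
inbox = [
    ("noreply@shopexample.com", "Welcome to ShopExample!"),
    ("alerts@bank.example", "Your statement is ready"),
    ("hello@travelapp.example", "Confirm your account"),
]
print(sorted(find_signup_senders(inbox)))
# -> ['shopexample.com', 'travelapp.example']
```

In practice the same filtering is done directly in your mail provider's search bar; the sketch just makes the keyword logic explicit.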

As you sift through the results, take note of the companies sending these messages. Many users are surprised to discover accounts they haven’t thought about in years. It’s not uncommon for the list to grow quickly once you start searching.

After identifying these accounts, compile a short list of those you no longer use. This list effectively becomes your cleanup checklist.


Once you have your list, visit the official website of each service directly—avoid clicking on links in old emails for security reasons. Look for account settings or options to delete your account. If you cannot find the option to remove your account, consider reaching out to the company’s support team for assistance.

While it may take some time, deleting unused accounts significantly reduces the number of platforms storing your personal information. This proactive approach is essential for maintaining your online privacy.

In addition to the initial search, consider conducting another round using phrases like “unsubscribe” or “account settings.” These terms often indicate that you have created an account with the respective company. Many users are astonished by the number of services that appear during this search.

Closing old accounts not only helps mitigate risks but also reduces the chances of your personal information being compromised. However, it’s important to note that your data might still exist elsewhere on the internet. Data broker companies frequently collect personal details from various sources, including apps, websites, and public records. They create profiles that may include your address, phone number, browsing habits, and more.

After removing unused accounts, many individuals opt to use data removal services that request the deletion of their listings from these data brokers. This combination can significantly decrease the amount of personal information available online.

For those interested in exploring data removal services, resources are available to help you assess whether your personal information is already exposed on the web. A quick scan can provide insights into your online presence and help you take necessary precautions.

Digital clutter accumulates quietly over time, with each sign-up adding another account linked to your email address. The good news is that your inbox holds the key to uncovering many of these forgotten accounts. A few simple searches can reveal long-dormant accounts that have been lingering online for years.

Cleaning up these accounts requires some effort, but the benefits are substantial. Fewer accounts mean fewer places where your personal information can leak or be exposed. It’s worth considering how many companies may still possess your personal information without your knowledge.

For more tips on managing your online security and privacy, consider subscribing to newsletters that offer insights and alerts on urgent security matters.

According to CyberGuy.com, taking proactive steps to manage your online accounts can significantly enhance your digital security.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers briefly classified a Tesla Roadster, launched into orbit by SpaceX in 2018, as a newly discovered asteroid, leading to a swift retraction of the designation.

Astronomers at the Minor Planet Center, part of the Harvard-Smithsonian Center for Astrophysics in Massachusetts, mistakenly classified a Tesla Roadster, launched into orbit by SpaceX in 2018, as an asteroid earlier this month. The object had been registered as 2018 CN41, a designation promptly deleted on January 3 once the object was confirmed to be Musk’s Roadster.

The Minor Planet Center clarified on its website that the registry for 2018 CN41 was removed after it was determined that the orbit of the object matched that of an artificial satellite, specifically the Falcon Heavy Upper Stage carrying the Tesla Roadster. The center stated, “The designation 2018 CN41 is being deleted and will be listed as omitted.”

The Tesla Roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. Initially, the vehicle was expected to enter an elliptical orbit around the sun, extending slightly beyond Mars before returning toward Earth. However, it appears to have exceeded the orbit of Mars and continued its trajectory toward the asteroid belt, as noted by Musk at the time.

When the roadster was misidentified as an asteroid, it was located less than 150,000 miles from Earth, which is closer than the moon’s orbit. This proximity raised concerns among astronomers about the need to monitor the object’s path and its potential closeness to Earth.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the incident, highlighting the challenges posed by untracked objects in space. He remarked, “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” emphasizing the importance of accurate tracking and identification of celestial bodies.

As the situation unfolded, Fox News Digital reached out to SpaceX for further comment regarding the misidentification of the Tesla Roadster.

This incident serves as a reminder of the complexities involved in space exploration and the ongoing need for precise monitoring of objects in orbit, whether they are natural or man-made.

According to Astronomy Magazine, the mix-up underscores the challenges faced by astronomers in distinguishing between asteroids and artificial objects, particularly as the number of satellites and other debris in space continues to grow.

CarGurus Data Breach Exposes 12.4 Million Records Linked to ShinyHunters

CarGurus users are at risk after the ShinyHunters hacking group leaked 12.4 million records, including sensitive personal and financial information.

CarGurus users are facing significant security risks following a data breach linked to the ShinyHunters hacking group, which has allegedly leaked 12.4 million records. This incident raises concerns about the safety of personal information for millions of individuals who utilize the popular auto shopping platform each month.

The leaked data reportedly includes a variety of sensitive information, such as names, phone numbers, email addresses, physical addresses, and finance pre-qualification details. While the majority of the records had been exposed in previous incidents, approximately 3.7 million are newly exposed, making this leak particularly concerning for users.

The ShinyHunters group published a 6.1GB file on February 21, claiming it contained user records from CarGurus, which operates not only in the United States but also in Canada and the United Kingdom. The platform attracts around 40 million visitors monthly, allowing users to compare vehicles, contact sellers, and apply for financing.

According to Have I Been Pwned, a website that tracks data breaches, the exposed information encompasses email addresses, IP addresses, full names, phone numbers, physical addresses, account IDs, dealer details, subscription information, and finance pre-qualification application data, along with their outcomes. Notably, about 70% of the data had previously appeared in other breaches, while the remaining 3.7 million records are new.

CarGurus has not confirmed the authenticity of the leaked dataset. ShinyHunters is notorious for leaking company data when ransom negotiations fail and has recently targeted major brands across various sectors, including telecom, retail, finance, and technology.

The group typically gains access to sensitive data through social engineering tactics rather than directly breaching firewalls. In past incidents, they have used phone calls or fake login pages to trick employees into providing credentials. Once inside, attackers can quietly access cloud systems that house customer data. In some cases, they have even convinced employees to install malicious applications that grant access to customer databases without triggering alarms.

If the dataset is legitimate, criminals now have access to detailed personal profiles linked to car shopping and financing activities, which can be highly valuable. The finance pre-qualification data is particularly sensitive, as it indicates that individuals were sharing financial details, making them prime targets for scams, identity theft attempts, and fraudulent loan offers.

A spokesperson for CarGurus acknowledged a cybersecurity incident, stating, “We promptly responded by securing the affected environment, and we are currently working with a leading cybersecurity firm to investigate. Based on the investigation to date, we believe the activity has been contained and limited in scope. Also, at this time, there are no indications that dealer data feeds, APIs, or core systems or products used by our consumers or dealer partners have been compromised. We remain fully operational, and our services continue without interruption. We will notify any affected individuals in accordance with applicable laws.”

In light of this breach, users are advised to take immediate steps to mitigate their risk. One recommended action is to check if your email address has been affected by visiting Have I Been Pwned. Users can enter their email address to determine if their information appears in the CarGurus leak.
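Beyond email lookups, Have I Been Pwned also offers a free "Pwned Passwords" range API that lets you check whether a password has appeared in known breaches without ever transmitting the password itself, via k-anonymity. A minimal sketch of the client-side hashing step (the network lookup against `https://api.pwnedpasswords.com/range/<prefix>` is deliberately omitted here):

```python
import hashlib

def hibp_range_parts(password):
    """Split the SHA-1 of a password into the 5-char prefix sent to the
    Pwned Passwords range API and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# Only `prefix` is sent to the API; it returns a list of hash suffixes,
# which the client compares against `suffix` locally.
print(prefix)  # 5BAA6
```

Because the server only ever sees five hex characters of the hash, it learns nothing usable about the password being checked.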

It is also essential to secure important accounts, such as email, medical, and banking, by using strong, unique passwords that combine letters, numbers, and symbols. Avoid predictable choices like names or birthdays, and never reuse passwords across multiple accounts. A password manager can simplify this process by securely storing complex passwords and generating new ones as needed.
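If you'd rather not rely on a password manager's generator, a cryptographically secure random password of the kind described above can be produced with Python's standard `secrets` module (the symbol set here is an arbitrary example):

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing letters, digits, and symbols,
    using a CSPRNG rather than the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Each run yields a different 16-character password drawn uniformly from the alphabet.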

Additionally, consider utilizing a personal data removal service. While no service can guarantee complete removal of personal data from the internet, these services actively monitor and erase personal information from various websites, reducing the risk of scammers cross-referencing data from breaches with information available on the dark web.

If CarGurus or your email provider offers two-factor authentication (2FA), enabling it adds an extra layer of security, making it more challenging for unauthorized individuals to access your accounts even if they have your password.
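The time-based one-time codes behind most 2FA authenticator apps are standardized in RFC 6238: an HMAC-SHA1 over a 30-second time counter, dynamically truncated to six digits. A minimal sketch for illustration only (use an audited library in practice):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    if t is None:
        t = time.time()
    counter = struct.pack(">Q", int(t) // step)    # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test vector: secret "12345678901234567890" at time 59 yields "287082"
print(totp(b"12345678901234567890", t=59))  # -> 287082
```

Because the code is derived from the current time window, a stolen password alone is not enough to log in.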

Users should exercise caution with emails or texts related to car loans, financing approvals, or dealership follow-ups. It is advisable not to click on links in unsolicited messages and instead contact the company directly using official contact details found on their website. Strong antivirus software can also help block malicious links and downloads that may accompany phishing campaigns.

For those who applied for financing, monitoring credit reports for unfamiliar inquiries or new accounts is crucial. Early detection can help prevent identity theft from escalating. If suspicious activity is detected, consider placing a credit freeze to safeguard personal information.

Identity theft protection services can also monitor unusual activity linked to your name, Social Security number, or financial accounts, alerting you promptly if someone attempts to open a new credit card in your name.

This incident underscores a broader issue concerning the security of personal and financial data collected by companies. If the leaked dataset is authentic, millions of individuals who were simply shopping for a car now face an increased risk of scams. CarGurus has acknowledged a cybersecurity incident but has not confirmed the authenticity of the leaked data, leaving customers uncertain about whether their sensitive financial application information was exposed.

As discussions around data security continue, the question arises: should companies that collect financing data be required to publicly confirm or deny breaches within a specific timeframe? This incident highlights the need for transparency in the handling of sensitive information.

For further information and tips on protecting your data, visit CyberGuy.

Indian-American IIT Graduate Devendra Chaplot to Assist Musk in Superintelligence Development

Indian American AI researcher Devendra Chaplot has joined Elon Musk’s xAI and SpaceX to collaborate on developing advanced artificial intelligence systems, aiming to create what he calls “superintelligence.”

Devendra Singh Chaplot, an Indian American AI researcher, has joined Elon Musk’s xAI and SpaceX, where he is working closely with Musk and his teams to develop what he describes as “superintelligence.”

A graduate of the Indian Institute of Technology (IIT) Bombay, Chaplot will collaborate closely with the teams at SpaceX and xAI on advanced artificial intelligence systems. He believes the partnership between the two companies presents a unique opportunity to merge physical and digital intelligence.

Chaplot emphasizes that the high engineering culture and substantial resources available at both SpaceX and xAI could facilitate significant breakthroughs in the creation of advanced AI technologies. He expressed his enthusiasm on social media, stating, “Together SpaceX and xAI combine physical and digital intelligence under a leader who understands hardware at the deepest level. Add a high-agency culture with frontier-scale resources, and you get the possibility to achieve something truly unique.”

In his announcement, Chaplot reflected on his journey in the field of artificial intelligence, saying, “I’m excited to advance the fields I’ve obsessed over for years, from robotics research to building AI models on the founding teams of Mistral and TML. Both were extraordinary journeys with extraordinary people that shaped how I think about building intelligence from the ground up.”

Chaplot expressed gratitude for the experiences that led him to this point, adding, “Grateful for everything that brought me here and can’t wait to get started.”

He holds a Bachelor of Technology (BTech) degree in Computer Science and Engineering, along with a minor in Applied Statistics from IIT Bombay. Chaplot later earned a PhD in machine learning from Carnegie Mellon University, a renowned institution in the field of artificial intelligence, where he focused on building intelligent autonomous navigation agents.

Throughout his career, Chaplot has worked at the intersection of machine learning, robotics, and computer vision. His contributions include the development of smart systems capable of perceiving and interacting with their environments.

Prior to joining xAI and SpaceX, Chaplot was part of the founding team at Thinking Machines Lab, where he worked on research and product development, including the creation of Tinker, a training API that enables users to train large language models (LLMs).

Before that, he was a founding member of Mistral AI, where he contributed to the training of several models, including Mistral 7B, Mixtral 8x7B, and Mistral Large. He also led the multimodal research team responsible for training Pixtral 12B and Pixtral Large, and established the Mistral U.S. office in Palo Alto.

Earlier in his career, Chaplot served as a research scientist at Facebook AI Research, where he focused on the convergence of computer vision and robotics.

As Chaplot embarks on this new chapter with Musk’s teams, the AI community is keenly watching for the innovations that may emerge from this collaboration, which aims to push the boundaries of artificial intelligence.

According to The American Bazaar, Chaplot’s expertise and experience position him as a significant contributor to the ambitious goals of xAI and SpaceX.

Data Brokers Allegedly Conceal Opt-Out Pages from Google Users

Major data brokers have been accused of obscuring opt-out pages from search engines, complicating consumers’ efforts to stop the sale of their personal information, according to a recent Senate investigation.

A recent investigation by the U.S. Senate has revealed that several prominent data brokers allegedly concealed their opt-out pages from search engines, making it increasingly difficult for consumers to prevent the sale of their personal information.

For anyone who has attempted to opt out of a data broker’s services, the experience can be frustrating. Users often find themselves navigating through layers of legal jargon and complex web pages, leading to the unsettling question: Do these companies even want you to find the exit? The Senate’s findings suggest that the answer is a resounding no.

The investigation found that major data brokers embedded code in their opt-out pages that blocked search engines from indexing them, meaning consumers could not easily locate the pages needed to request that the sale of their data be stopped.

Following pressure from Senator Maggie Hassan, four companies have since removed the obstructive code from their sites. The firms implicated in the report are known for collecting and selling personal information for various purposes, including marketing, analytics, and identity verification. The types of data they handle can range from browsing habits and device details to location history and sensitive identifiers.

Earlier investigations conducted by The Markup and CalMatters had already indicated that numerous data brokers employed “no index” code to obscure opt-out instructions from Google search results. While some companies removed the code after being contacted by reporters, Senator Hassan’s office later confirmed that the four companies in question still had their opt-out pages hidden from search engines. They have now taken steps to rectify this issue.
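The "no index" code in question is typically a robots meta tag, `<meta name="robots" content="noindex">`, or an equivalent `X-Robots-Tag` HTTP header, either of which tells search engines not to list the page. A small sketch of how such a directive could be detected in a page's HTML (the sample markup below is invented):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flag <meta name="robots" content="...noindex..."> directives."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "meta"
                and (a.get("name") or "").lower() == "robots"
                and "noindex" in (a.get("content") or "").lower()):
            self.noindex = True

def page_blocks_indexing(html):
    detector = NoindexDetector()
    detector.feed(html)
    return detector.noindex

sample = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(page_blocks_indexing(sample))  # True
```

This is essentially the check reporters and Senate staff could run against each broker's opt-out URL.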

However, one company, Findem, has not yet removed the “no index” code from its “Do not sell or share my personal information” page. In response, Findem stated that an email from the senator’s office did not reach its CEO due to spam filtering, but assured that its privacy channels are actively monitored. The Senate Committee’s report highlighted this lack of action as a significant concern regarding the responsiveness to privacy requests and the accessibility of opt-out rights.

In a statement, a spokesperson for 6sense emphasized their commitment to privacy transparency, noting that their Privacy Center, where individuals can exercise their opt-out rights, has always been fully indexed. They acknowledged that a “no index” directive was previously included on their Privacy Policy page to mitigate spam but confirmed that it was removed immediately after the issue was raised by the Committee.

Opt-out pages are not merely a courtesy; in many states, they are mandated by law. When companies obscure these pages from search engines, they create barriers that hinder consumers from taking control of their personal information. This is particularly concerning given the financial repercussions of data broker breaches, which have cost U.S. consumers over $20 billion due to identity theft linked to four major data broker incidents.

The implications of these breaches extend beyond privacy concerns; they pose significant risks to consumer protection. Criminal networks can exploit personal data such as Social Security numbers and home addresses to craft convincing scams, making the issue of data broker breaches a pressing consumer protection matter.

Senator Hassan’s investigation is part of a broader initiative to combat scams, which now account for nearly half a trillion dollars in losses annually and have evolved into one of the largest illicit industries worldwide. She has also initiated inquiries into the roles of satellite internet providers, online dating platforms, AI companies, and federal agencies in preventing fraud.

The uncomfortable reality is that your personal data likely resides in numerous databases you may not even be aware of. You did not consent to this; your information is traded within a vast marketplace. Even when opt-out forms are available, the process can feel overwhelming and time-consuming. With the absence of a comprehensive federal privacy law similar to the European GDPR, regulations vary significantly from state to state.

While the recent changes have made opt-out pages easier to locate, the overarching system remains largely unchanged. Completely erasing your presence from the internet is not feasible overnight, but there are steps you can take to minimize your exposure.

One effective method is to search your full name and city on Google to identify data broker listings, many of which contain opt-out links hidden within their privacy policies. California residents can utilize a free state-run tool called DROP at privacy.ca.gov/drop/ to request deletion from over 500 registered brokers, with other states beginning to implement similar systems.

Additionally, visiting the privacy or “Do not sell my information” pages on broker sites and carefully following the provided instructions can help you take control of your data. Keeping track of confirmation emails is also crucial.

For those seeking a more automated approach, data removal services can streamline opt-out requests across various brokers. While these services may not be perfect, they can save significant time. You can also explore expert-reviewed password managers and enable two-factor authentication (2FA) for financial and social accounts to enhance your security.

The data broker industry operates legally and transparently, yet many individuals remain unaware of the extent to which their information is traded. Until Congress enacts a national privacy law, oversight will continue to be fragmented, leaving consumers to navigate the complexities of data management on their own.

This situation transcends the issue of hidden code; it is fundamentally about control. When companies obscure opt-out pages from search engines, they create an uneven playing field. Although recent scrutiny has made these pages more accessible, the broader ecosystem remains designed to profit from personal data.

The pressing question is not merely whether opt-out pages are now visible on Google, but rather how much of your personal life you are comfortable entrusting to companies you may never have heard of. For further insights and assistance, visit CyberGuy.com.

Remote Robot Surgery Successfully Treats Cancer 1,500 Miles Away

U.K. surgeons have successfully performed remote robot-assisted surgery to remove prostate cancer from a patient located 1,500 miles away, marking a significant milestone in telesurgery.

Surgeons in the United Kingdom have achieved a groundbreaking milestone in medical technology by successfully conducting remote robot-assisted surgery to remove prostate cancer from a patient located 1,500 miles away. This pioneering operation, carried out at The London Clinic, represents the first instance of robot-assisted telesurgery in the U.K.

Traditionally, patients requiring specialized cancer surgery must travel to see a specialist. In this case, however, the specialist traveled to the patient. The procedure took place at St. Bernard’s Hospital in Gibraltar, where the patient remained in the operating room while Professor Prokar Dasgupta operated the robotic system from a control console at The London Clinic’s robotic center on Harley Street in London.

The advanced surgical robot used for this procedure is the Toumai robotic surgical system, developed by MicroPort MedBot. This platform is specifically designed for high-precision, minimally invasive surgeries. The operation was made possible through a secure fiber optic network that transmitted the surgeon’s movements to the robot in Gibraltar, with a latency of just 48 milliseconds—fast enough to create an almost real-time experience.
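A back-of-the-envelope calculation shows why 48 milliseconds is plausible for this link (assuming roughly 1,500 miles of fiber and light travelling at about two-thirds of its vacuum speed in glass, both rough assumptions):

```python
# Rough round-trip propagation estimate for the London-Gibraltar link.
distance_km = 1500 * 1.609           # ~1,500 miles converted to km
fiber_speed_km_s = 200_000           # light in fiber: roughly 2/3 of c
one_way_ms = distance_km / fiber_speed_km_s * 1000
round_trip_ms = 2 * one_way_ms
print(round(round_trip_ms, 1))       # ~24 ms of pure propagation delay
```

Pure propagation accounts for only about half of the reported 48 ms; the remainder is consumed by routing, video encoding, and the robot's control loop, which is why dedicated low-latency networks matter so much for telesurgery.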

During the procedure, local urological surgeons James Allen and Paul Hughes were on standby in Gibraltar, ready to intervene if any complications arose or if the connection was interrupted. Fortunately, the operation proceeded without any issues.

The patient, 62-year-old Paul Buxton, has been a resident of Gibraltar for approximately four decades. He had initially planned to travel to London for his surgery, but was offered the opportunity to participate in a telesurgery trial earlier this year. This innovative approach allowed him to undergo the procedure in his local hospital, significantly reducing the disruption to his life. Reports indicate that he felt fantastic just days after the surgery.

The development of remote robotic surgery has been a long time in the making, with early examples dating back to the 2001 Lindbergh Operation, in which surgeons in New York performed a gallbladder removal on a patient in Strasbourg, France. Since then, the technology has advanced significantly, with cross-continental robotic surgeries conducted between cities such as Rome and Beijing, as well as long-distance prostate operations in parts of Africa.

The successful procedure at The London Clinic signifies a shift in the landscape of remote robotic surgery, moving from experimental demonstrations to practical medical applications. To further showcase this technology, the hospitals plan to live-stream a telesurgery procedure to thousands of surgeons at the upcoming European Association of Urology Congress.

Several key technologies work in tandem to make remote surgery feasible. Surgeons need to see and react instantly during operations, as even minor delays can complicate precise movements. Modern fiber optic networks, along with backup 5G connections, help maintain extremely low latency. Robotic surgical systems translate a surgeon’s hand movements into smaller, more stable actions inside the patient’s body, which can enhance outcomes in delicate procedures like prostate cancer removal. High-definition 3D cameras provide surgeons with exceptional clarity, often surpassing the visibility offered by traditional open surgery.

Despite these advancements, remote robotic surgery still faces significant challenges. Infrastructure remains a critical issue, as hospitals must ensure that their networks are highly reliable with minimal downtime. The costs associated with robotic surgical systems and specialized networks can also be prohibitive, often running into millions of dollars. Additionally, regulatory concerns arise when surgeons operate across borders, introducing complexities related to legal and licensing requirements.

Every remote procedure necessitates contingency plans, with local surgical teams prepared to step in if technology fails. For now, hospitals view telesurgery as an emerging capability rather than a routine practice.

The long-term implications for patients could be profound. In the future, individuals may not need to travel to major medical centers for complex procedures. Instead, specialists could operate remotely, allowing patients to remain in hospitals closer to home. This evolution could particularly benefit those in rural areas or regions with limited access to specialized care, potentially reducing wait times for certain procedures.

Safety remains the paramount concern in this transition. Hospitals must demonstrate that remote procedures are as reliable as traditional surgeries before the technology can become widespread. The successful connection between London and Gibraltar illustrates the rapid advancements in surgical technology, with reliable networks and sophisticated robots enabling surgeons to guide delicate procedures from thousands of miles away.

While remote surgery may not become commonplace overnight, the trajectory is clear. As technology continues to improve, distance may no longer be a barrier to accessing world-class surgical care.

For further insights on this topic, please refer to Fox News.

Private Lunar Lander Blue Ghost Successfully Lands on Moon for NASA

A private lunar lander, Blue Ghost, successfully landed on the moon on Sunday, delivering equipment for NASA and marking a significant milestone for commercial space exploration.

A private lunar lander carrying equipment for NASA successfully touched down on the moon on Sunday, with Mission Control confirming the landing from Texas. This achievement highlights the growing involvement of private companies in lunar exploration as they prepare for future astronaut missions.

Firefly Aerospace’s Blue Ghost lander made its descent from lunar orbit on autopilot, targeting the slopes of an ancient volcanic dome located in an impact basin on the moon’s northeastern edge. The company’s Mission Control, situated outside Austin, Texas, celebrated the successful landing.

“You all stuck the landing. We’re on the moon,” said Will Coogan, chief engineer for the lander at Firefly Aerospace.

This upright and stable landing makes Firefly the first private company to successfully place a spacecraft on the moon without crashing or tipping over. Historically, only five countries—Russia, the United States, China, India, and Japan—have accomplished this feat, with some government missions having failed in the past.

The Blue Ghost lander, named after a rare species of firefly found in the U.S., stands 6 feet 6 inches tall and spans 11 feet wide, providing enhanced stability during its lunar operations.

Approximately half an hour after landing, Blue Ghost began transmitting images from the lunar surface. The first photo sent back was a selfie, albeit somewhat obscured by the sun’s glare.

In addition to Blue Ghost, two other companies are preparing to launch their lunar landers, with the next mission expected to join Blue Ghost on the moon later this week.

This successful landing marks a significant step forward in the commercial space sector, as private companies continue to explore opportunities on Earth’s natural satellite.

According to The Associated Press, the advancements in lunar exploration by private entities could pave the way for more ambitious missions in the future.

Donny Osmond Utilizes AI Technology to Duet with His Younger Self

Donny Osmond’s Las Vegas residency features a groundbreaking digital duet with his 14-year-old self, showcasing the intersection of nostalgia and modern technology in entertainment.

Donny Osmond has long been a figure of evolution in the entertainment industry, and his latest venture in Las Vegas exemplifies this spirit. During his residency at Harrah’s, the legendary performer engages audiences with a digital duet featuring a virtual version of his 14-year-old self, the same teenage sensation who won hearts with hits like “Puppy Love.” This innovative performance not only captivates but also reflects Osmond’s willingness to embrace technology as a means of reinterpreting his storied career.

Osmond’s ability to connect with multiple generations is a testament to his enduring appeal. Older fans remember him as the teen idol who burst onto the scene, while others know him from his iconic variety show with sister Marie. Theater enthusiasts recall his role in “Joseph and the Amazing Technicolor Dreamcoat,” and younger audiences recognize him as the voice of Captain Shang in Disney’s “Mulan.” Additionally, reality TV fans may remember his appearances on “Dancing With the Stars” and “The Masked Singer.” This diverse portfolio allows Osmond to transcend eras, and he embraces this multifaceted identity rather than shying away from it.

In a recent conversation for the “Beyond Connected” podcast, Osmond shared insights into the technology behind his performance. The concept of singing alongside a digital version of himself has been a long-held dream. “Even when I was a teenager, I thought someday there’s going to be technology where John Wayne could be Obi-Wan Kenobi. And I was right,” he remarked, reflecting on his fascination with the possibilities of future technology.

Osmond’s curiosity led him to ponder, “Why can’t I sing ‘Puppy Love’ with my 14-year-old self on stage?” The answer involved a blend of advanced digital production techniques, AI modeling, and innovative stage design. He explained, “The face is actually my 14-year-old face taken from pictures, the voice is my voice from interviews when I was 14, and the body is my 14-year-old grandson.” This combination creates a stunning illusion where both versions of Osmond appear to share the stage simultaneously.

Contrary to popular belief, the younger Osmond is not a hologram. “It’s not a projection, like a laser projection. It’s not like a hologram. It’s a totally different technology,” he clarified. The illusion relies on a hollow box technology integrated into the stage set, designed to resemble a vintage recording booth. Inside, advanced visual systems merge CGI, AI modeling, and stage lighting to produce a full-size, three-dimensional image of the younger Osmond, animated by his grandson’s movements. This setup allows Osmond to interact with his younger self in real time, creating a captivating experience for the audience.

Even after performing this sequence night after night, Osmond finds the experience exhilarating. “I do it every night, and it never gets old. It’s like looking in the mirror 54 years ago,” he said. For longtime fans, this moment serves as a bridge between the youthful star they once adored and the seasoned performer he has become, illustrating a career that spans generations.

Osmond’s enthusiasm for technology is evident in his approach to his performances. “Ever since I was a teenager, I’ve always been kind of a geek or nerd about technical things,” he admitted. This passion drives him to explore new tools and methods to keep his show fresh and engaging. He even revealed a surprising hobby: “I’d have to say, uh, Google Sheets because, uh, I’ve created algorithms.” His interest in data analysis and technology extends beyond the stage, as he employs smart home systems to monitor his properties and ensure security.

As discussions around artificial intelligence continue to evolve in the entertainment industry, Osmond maintains a balanced perspective. “Any technology put in the wrong hands can turn into nefarious things, but look at the good it can do,” he stated. He believes that AI has the potential to drive significant advancements across various fields, including medicine and entertainment. “What a great time to be alive with today’s technology. It’s amazing to watch it all happen in real time,” he added, emphasizing the importance of staying engaged with technological progress.

Osmond also shared an intriguing anecdote about his music’s reach beyond Earth. He mentioned that one of his songs, “Start Again,” was reportedly used to test the sound system on a spacecraft capsule. “They actually used my song to test the sound system on one of the capsules,” he said, adding that his voice may even be sitting on the moon, as he contributed background vocals to a song that was taken there during the Apollo missions.

Reflecting on how digital platforms might have transformed his early career, Osmond mused, “Can you imagine what I could have done during the ‘Puppy Love’ years with social media?” He noted that the connections fans once sought in person are now often facilitated through social media and digital communities, illustrating how technology has reshaped the entertainment landscape.

Osmond’s career began with his brothers as part of the Osmonds, a family group that became a television sensation in the late 1960s and early 1970s. He later gained fame alongside his sister Marie in their hit variety series “Donny & Marie.” Today, he continues to headline his own residency at Harrah’s Las Vegas, with performances extended through May 2026, reflecting his ongoing popularity.

To keep fans engaged, Osmond has developed the Donny app, which consolidates news, videos, tour updates, and a timeline of his career. Fans can also access tickets and show information through his official website, Donny.com. By blending nostalgia with modern technology, Osmond remains connected to fans across generations while pushing the boundaries of his performances.

Donny Osmond’s journey illustrates how curiosity and adaptability can propel an artist forward. Rather than resisting change, he continues to explore the technologies shaping today’s world, from AI-enhanced performances to data-driven applications and smart home systems. His enthusiasm for innovation mirrors the passion he brings to his craft, making him a unique figure in the entertainment industry. For more insights into his experiences and thoughts on technology, be sure to listen to the “Beyond Connected” conversation with Donny Osmond.

For those curious about their own digital habits, a quick quiz is available at Cyberguy.com to assess device and data protection.

According to CyberGuy, Donny Osmond’s career exemplifies the power of curiosity and innovation in the ever-evolving landscape of entertainment.

Athena Lunar Lander Reaches Moon; Condition Still Uncertain

Athena lunar lander successfully reached the moon, but mission controllers remain uncertain about its condition and landing site.

Mission controllers have confirmed that the Athena lunar lander successfully touched down on the moon, but the status of the spacecraft remains unclear. The landing occurred earlier on Thursday, yet officials have not been able to ascertain the condition of the lander or the precise location of its touchdown, according to a report from the Associated Press.

Athena, owned by Intuitive Machines, was equipped with an ice drill, a drone, and two rovers. While the lander has reportedly been able to communicate with its controllers, its condition is still being evaluated. Tim Crain, mission director and co-founder of Intuitive Machines, was heard instructing his team in Texas to “keep working on the problem,” even after receiving apparent “acknowledgments” from the spacecraft.

The uncertainty surrounding Athena’s status follows a challenging history for Intuitive Machines. Last year, their Odysseus lander reached the moon but landed sideways, which added pressure to the current mission. Athena’s landing marks a significant milestone, as it is the second lunar craft to land this week, following Firefly Aerospace’s Blue Ghost, which successfully touched down on Sunday.

Firefly’s chief engineer, Will Coogan, celebrated the achievement, stating, “You all stuck the landing. We’re on the moon.” The successful landing of Blue Ghost has made Firefly Aerospace the first private company to place a spacecraft on the moon without it crashing or landing in an unstable position.

As the situation develops, NASA and Intuitive Machines concluded their online live stream and announced plans to hold a news conference later on Thursday to provide updates on Athena’s status.

According to the Associated Press, the outcome of this mission is being closely monitored as the space community awaits further information about the lander’s condition and operational capabilities.

Transfer Photos from Your Phone to a Hard Drive Easily

Learn how to transfer photos from your smartphone to a hard drive, freeing up space and avoiding costly cloud storage fees while maintaining access to your images.

For many smartphone users, the moment inevitably arrives when a notification alerts them that their device storage is nearly full. This often leads to a frantic search for ways to free up space, including deleting emails, clearing messages, and removing apps.

Many find themselves in this predicament due to automatic backups to services like Google Photos or iCloud, which offer limited free storage. Once that space is filled, users typically face a common dilemma: pay for additional storage or find an alternative solution.

Janice from Alabama recently reached out about her struggle with this issue, a situation that millions of smartphone users encounter annually. Fortunately, there is a viable option: transferring photos to a hard drive that you own. This method not only allows you to keep your images accessible but also helps you avoid ongoing subscription fees.

The simplest way to transfer your photos is to first copy them to a computer. From there, you can easily move them to an external hard drive. The process varies slightly depending on whether you are using an Apple or Android device.

For Apple users, the process involves importing photos through the Photos app on your computer rather than treating the phone as a storage device. If you are signed into iCloud and have iCloud Photos enabled on your iPhone, your photos may already be syncing automatically. In this case, you can access and download them directly from the Photos app on your Mac or through iCloud Photos in a web browser.

Once your photos are on your computer, copy the files into a designated backup folder so you have a complete set before moving anything to the external drive. For Android users on Windows, the process is straightforward: connect the phone with a USB cable, and you can copy your photos directly to your computer.

After your photos are safely stored on your computer, transferring them to an external hard drive is a quick task. External drives can accommodate tens of thousands of photos, depending on their capacity. For recommendations on the best external drives, visit Cyberguy.com.

If you prefer to skip the computer altogether, some flash drives can connect directly to smartphones. These drives typically come with a companion app that facilitates the transfer of photos from your phone to the drive. This option is particularly useful for those needing to free up space quickly. Check out our best flash drive recommendations at Cyberguy.com for more information.

After transferring your photos to a hard drive, take some time to organize them into folders. While hard drives are generally reliable, maintaining a second backup is advisable to protect your memories in case one drive fails.
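If you are comfortable with a little scripting, the organizing step can even be automated. Below is a minimal Python sketch that copies photos from one folder into date-named subfolders on a destination drive. It is an illustration only: it assumes your photos sit in a single source folder, and it uses each file’s modification date as a stand-in for the date the photo was taken (the two can differ after transfers).

```python
import shutil
from datetime import datetime
from pathlib import Path

# File extensions we treat as photos/videos; adjust to taste.
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".heic", ".mp4"}

def organize_photos(source: Path, dest: Path) -> None:
    """Copy media files from source into dest, grouped into YYYY-MM folders.

    The folder name comes from each file's modification time, which is
    only an approximation of when the photo was actually taken.
    """
    for photo in source.glob("*"):
        if photo.suffix.lower() not in MEDIA_EXTENSIONS:
            continue  # skip non-media files
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        folder = dest / taken.strftime("%Y-%m")
        folder.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, so the originals stay intact on the phone backup
        shutil.copy2(photo, folder / photo.name)
```

Run it with the source set to the folder where you copied your phone’s photos and the destination set to a folder on the external drive; because it copies rather than moves, the originals are untouched if anything goes wrong.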

Although cloud storage may seem inexpensive initially, the monthly fees can accumulate over time. In contrast, an external hard drive often costs less than a year or two of cloud storage fees. Once purchased, the storage is essentially free, and you retain full control over your photos rather than relying solely on a company’s server.

Janice’s inquiry reflects a common concern: do we really need to continue paying companies to store our own memories? The answer is no. With a simple cable and an affordable hard drive, you can free up space on your phone, keep every photo you want, and avoid ongoing storage fees. Once you familiarize yourself with the process, it becomes quick and routine.

Consider this: if your phone holds years of photos and videos, should those memories reside solely on a company’s cloud server, or should they be stored somewhere you fully control? For more tips and to share your thoughts, visit us at Cyberguy.com.

According to CyberGuy.com, taking control of your digital memories is not only feasible but also beneficial in the long run.

ISS Crew Member Plays Prank as SpaceX Team Arrives

Russian cosmonaut Ivan Vagner welcomed the Crew-10 astronauts to the International Space Station with a humorous twist, donning an alien mask during their arrival on March 16, 2025.

In a lighthearted moment aboard the International Space Station (ISS), Russian cosmonaut Ivan Vagner greeted the Crew-10 astronauts with a playful twist. As the SpaceX Crew Dragon capsule successfully docked at 12:04 a.m. EDT on March 16, 2025, Vagner welcomed the newcomers while wearing an alien mask, showcasing that even astronauts have a sense of humor.

The Crew-10 mission launched from NASA’s Kennedy Space Center in Florida at 7:03 p.m. on Friday, March 14, and arrived at the ISS approximately 29 hours later. As the ISS crew prepared the capsule for deboarding, Vagner was seen floating around in his alien disguise, complete with a hoodie, pants, and socks, creating a memorable and amusing atmosphere for the new arrivals.

NASA astronauts Anne McClain and Nichole Ayers, JAXA (Japan Aerospace Exploration Agency) astronaut Takuya Onishi, and Roscosmos cosmonaut Kirill Peskov entered the ISS shortly after the hatches between the space station and the SpaceX Dragon spacecraft were opened at 1:35 a.m. EDT. This moment was marked by the ringing of the ship’s bell, a tradition that signifies the arrival of new crew members.

Following the hatch opening, the Crew-10 astronauts floated into the station, where they were greeted with handshakes and hugs from the Expedition 72 crew, including Vagner. “It was a wonderful day. Great to see our friends arrive,” said Suni Williams, who was among those welcoming the newcomers.

Williams and fellow astronaut Butch Wilmore are expected to guide the new arrivals through space station operations before returning home from what became a nine-month stay. Their mission, Boeing’s first astronaut flight aboard the Starliner, was initially scheduled to last only about a week. However, problems with the spacecraft forced NASA to bring the Starliner back to Earth without a crew, extending their stay.

As part of the ongoing operations aboard the ISS, Crew-9 commander Nick Hague and Russian cosmonaut Aleksandr Gorbunov are scheduled to depart the station on Wednesday, March 19, at approximately 4 a.m. EDT, before splashing down off the coast of Florida.

This playful encounter highlights the camaraderie and lighthearted spirit that exists among astronauts, even in the challenging environment of space. Such moments not only provide entertainment but also strengthen the bonds between international crew members working together in orbit.

According to Fox News, the Crew-10 mission continues to exemplify the collaborative efforts of space agencies around the world as they explore the final frontier.

Condé Nast Technology Leader Sanjay Bhakta Joins Flatiron Software Board

Sanjay Bhakta, a prominent Indian American technology executive, has joined the board of Flatiron Software to guide the company’s strategic growth in software engineering and artificial intelligence.

Sanjay Bhakta, the Chief Product and Technology Officer at Condé Nast, has been appointed to the board of Flatiron Software. His role will focus on shaping the strategic growth of the software engineering and AI company.

Flatiron Software, based in Miami, Florida, is known for its ability to deliver on promises that larger firms often fail to fulfill. The company specializes in providing technology solutions for enterprises that cannot afford to make mistakes, emphasizing speed and scalability.

Bhakta brings over two decades of experience in technology leadership, having previously built and managed technology at major organizations such as HBO, Pearson, and AT&T. These companies are known for their complex environments where failure is not an option.

He joins a distinguished board that includes Rajiv Pant, former CTO of The New York Times and technology leader at The Wall Street Journal, Condé Nast, and Hearst.

“I’m excited to join Flatiron Software’s board at such a pivotal moment for the industry,” Bhakta stated. “The company has built a strong foundation for helping organizations navigate AI-driven transformation, and I look forward to contributing my experience to accelerate that impact.”

Bhakta’s appointment is part of Flatiron’s strategic investment in building a board equipped to guide the company through its next growth phase. As demand for AI-augmented software development and strategic technology consulting increases, Flatiron is positioning itself with leadership that has not only witnessed digital transformation but has also driven it.

Currently, Bhakta leads Condé Nast’s global technology and product strategy. Throughout his career, he has transformed how large organizations build and deliver technology. His expertise includes scaling engineering teams, modernizing digital infrastructure, and fostering conditions for sustained innovation.

Bhakta has a proven track record of overseeing global teams of over 1,000 engineers and managing technology budgets exceeding $250 million. His approach consistently emphasizes measurable business outcomes rather than technology for its own sake.

At HBO, he was instrumental in building and leading the end-to-end digital media supply chain that powered HBO GO and HBO NOW. This mission-critical operation required both deep technical expertise and sharp strategic judgment.

During his tenure at Pearson, Bhakta spearheaded the company’s digital transformation, successfully transitioning it from a traditional publishing giant to a platform-first, cloud-native organization. Across all his roles, Bhakta has maintained a focus on making technology work harder for the business and the people it serves.

His extensive experience and strategic insight are expected to play a crucial role in Flatiron Software’s continued growth and innovation in the rapidly evolving technology landscape, according to a media release.

The announcement of Bhakta’s appointment underscores Flatiron’s commitment to enhancing its leadership infrastructure as it navigates the complexities of the AI-driven market.

For more information, refer to The American Bazaar.

Android Addresses 129 Security Vulnerabilities in Major Update

Google’s latest Android update addresses 129 security vulnerabilities, including a zero-day flaw linked to Qualcomm chips that has already been exploited in targeted attacks.

Google has rolled out a significant Android update that fixes a total of 129 vulnerabilities, including a critical zero-day flaw associated with Qualcomm chips that has already been exploited in attacks.

For many users, Android security updates often go unnoticed until a headline like this emerges. Suddenly, the device used for messaging, banking, and work becomes part of a broader cybersecurity narrative. This week, Google’s latest Android security updates have highlighted the importance of timely software maintenance.

Among the vulnerabilities addressed, one particular flaw has caught the attention of security researchers. Tracked as CVE-2026-21385, this zero-day vulnerability is concerning because it has already been utilized in targeted attacks. Attackers discovered this flaw before many devices had received a fix, which poses a significant risk to users.

The issue is linked to the graphics processing component in many Qualcomm chipsets. Specifically, it involves an integer overflow, a type of calculation error that can lead to memory corruption within the system. Once this occurs, attackers may gain unauthorized access to the device.
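The mechanics of this kind of bug can be sketched briefly. The specific Qualcomm code is not public, so the snippet below is a generic illustration, not the actual flaw: it simulates an unchecked 32-bit size calculation in Python, showing how multiplying a large element count by an element size can wrap around and yield a tiny allocation size, so a later write of the full data corrupts memory beyond the undersized buffer.

```python
def alloc_size_32bit(count: int, elem_size: int) -> int:
    """Simulate an unchecked 32-bit size calculation: the product is
    truncated to 32 bits, as it would be in C code using a uint32_t."""
    return (count * elem_size) & 0xFFFFFFFF

# A hypothetical request for ~1.07 billion 4-byte elements.
count = 0x4000_0001            # 1,073,741,825 elements
true_size = count * 4          # 4,294,967,300 bytes actually needed
wrapped = alloc_size_32bit(count, 4)
# wrapped is only 4: the truncated product wrapped past 2**32, so the
# code would allocate a 4-byte buffer and then write ~4 GB into it,
# corrupting adjacent memory -- the root of this class of exploit.
```

A correct implementation checks the multiplication for overflow (or uses a wider type) before allocating, which is precisely the kind of fix such a patch would introduce.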

Qualcomm has indicated that this flaw affects 235 different chipsets, meaning a wide range of Android phones could potentially be impacted. Google’s Threat Analysis Group identified the issue and reported it through coordinated disclosure practices, prompting Qualcomm to collaborate with device manufacturers to implement necessary patches.

The implications of this Android security vulnerability are serious. Several of the patched vulnerabilities allow attackers to execute code remotely or gain elevated privileges on a device. One particular flaw within the Android System component is especially alarming, as it could enable remote code execution without any user interaction. This means an attacker could exploit the flaw without requiring the victim to click a link or install an app, making it one of the most dangerous types of vulnerabilities.

The March Android security bulletin addresses ten critical flaws across the System, Framework, and Kernel components. These core components are essential to Android’s functionality, so any weaknesses can have widespread repercussions across millions of devices.

Google has released two patch levels for this update. The second update encompasses everything in the first, in addition to fixes for extra hardware components and third-party software. Google Pixel devices typically receive updates immediately, while many other Android users may experience delays.

Phone manufacturers such as Samsung, Motorola, and OnePlus often need to test the patches before they are released for specific models. Additionally, carriers may delay updates to ensure compatibility. Consequently, some users receive security patches promptly, while others may have to wait weeks.

To protect your Android phone from security threats, there are several proactive steps you can take. First, install Android updates as soon as they become available. Regularly check for updates by navigating to Settings, tapping on Security and Privacy or Software Update, and selecting Check for Updates.

Second, avoid downloading apps from unknown sources. Stick to trusted stores like Google Play, as third-party app stores can pose a higher risk of malware.

Third, keep Google Play Protect enabled. This built-in malware protection scans apps for malicious behavior and alerts you to any suspicious activity. However, it is important to note that Google Play Protect is not infallible. Therefore, consider using robust antivirus software for an additional layer of protection.

Additionally, set a strong passcode on your phone and enable fingerprint or face unlock features if available. This helps safeguard your device in case it is lost or stolen. Lastly, exercise caution with suspicious links, as many attacks begin with phishing messages. Avoid clicking on unknown links in texts, emails, or social media messages.

This recent Android update underscores the complexities of modern mobile security. Google’s Threat Analysis Group frequently uncovers vulnerabilities that may already be exploited in real-world scenarios. These findings trigger coordinated responses involving chip manufacturers, device makers, and security researchers. In this instance, Qualcomm received the report in December and provided fixes to device manufacturers in early 2026.

While the process may appear slow from the outside, it involves numerous companies collaborating to prevent widespread exploitation. Security updates may not seem exciting, but they are crucial for protecting billions of smartphones globally.

This latest Android update serves as a stark reminder of the importance of timely software updates. A zero-day flaw linked to Qualcomm graphics hardware was already being targeted before many users were even aware of its existence. Installing updates promptly is one of the simplest yet most effective ways to protect your device and personal data.

So, the next time your Android device prompts you to install a security patch, consider this: Do you install it immediately, or do you tap “remind me later”?

For further information, consult CyberGuy.com.

Drone Technology and AI Transforming Modern Warfare Tactics

Artificial intelligence and advanced computer vision are revolutionizing drone capabilities, reshaping modern warfare, and redefining the dynamics of the battlefield.

As an ophthalmologist and technology commentator, I have been captivated by the transformative impact of artificial intelligence (AI) and computer vision on drone technology and its implications for modern warfare. In this new era of conflict, the advantage lies not solely with the largest bombers or stealth fighters, but with drones that possess the ability to see and act with superhuman precision.

Unmanned aerial vehicles (UAVs), once merely remote-controlled flying cameras, have evolved into autonomous warriors. Their vision systems, powered by AI, are now central to defining military strategy, tactics, and geopolitical maneuvers. This transformation is particularly evident in the ongoing conflict in Iran, where drones have inundated the airspace, turning it into a contested battlefield dominated by AI-driven vision and autonomous targeting.

The evolution of drones has been remarkable. From the early days of unmanned flight, which began with Austrian explosive balloons in 1849, to the World War I Kettering Bug and the mass-produced Radioplane OQ-2, the groundwork for contemporary aerial systems was laid. By the 1970s, platforms like Israel’s Tadiran Mastiff showcased the potential of real-time video surveillance. Today, drones operate across both civilian and military domains, transitioning from passive cameras to intelligent agents capable of interpreting their surroundings, making decisions, and executing complex missions.

The integration of AI and computer vision has revolutionized drone capabilities. Modern drones can autonomously avoid collisions, detect and track objects, navigate intricate environments, and create three-dimensional maps for mission planning. In military contexts, these vision systems facilitate real-time reconnaissance, target identification, adaptive mission execution, and swarm tactics that can overwhelm defenses. By combining rapid data processing with autonomous decision-making, drones extend human perception, operate in hazardous conditions, and perform tasks that would be perilous for human operators.

Human vision is remarkably sophisticated, adapting instantly to varying light conditions, interpreting depth and motion, and integrating context, memory, and experience to recognize patterns and make quick decisions. Soldiers spotting camouflage, pilots navigating shifting terrain, and commanders assessing intent rely on these faculties daily. In contrast, drone vision is engineered for speed, scale, and consistency. Modern drones utilize AI-powered systems that combine high-resolution cameras, infrared sensors, and sometimes LIDAR to capture visual data. Neural networks analyze this information in real-time, detecting objects, calculating movement, and predicting hazards.

Unlike humans, drones can track hundreds of objects simultaneously, operate in total darkness or inclement weather, and process inputs in milliseconds. While humans excel at interpretation, drones dominate in relentless detection and rapid reaction.

At the heart of today’s military drones is computer vision. Cameras, infrared sensors, and LIDAR feed streams of visual data into convolutional neural networks (CNNs) and other AI models that classify targets, estimate distances, and prioritize threats. This data fusion creates three-dimensional maps for navigation, obstacle avoidance, and autonomous target tracking. In conflict zones like Iran, this capability allows drones to detect incoming threats, evade counter-fire, and hunt other drones with minimal human oversight. Unlike human eyes, which interpret context and cues, drone AI converts raw pixels into actionable intelligence at speeds unmatched by human operators.
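The last stage of that pipeline, turning classified detections into a ranked list of threats, can be sketched in a few lines. The weights and scoring formula below are illustrative assumptions, not any real system’s logic: each detection carries a class label, a model confidence, and an estimated range, and the sketch ranks closer, higher-confidence, higher-weight detections first.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class assigned by the vision model
    confidence: float  # model confidence, 0..1
    distance_m: float  # estimated range from sensor fusion, in meters

# Hypothetical per-class weights; a real system would derive these
# from mission parameters rather than hard-code them.
THREAT_WEIGHT = {"drone": 0.9, "vehicle": 0.5, "person": 0.2}

def prioritize(detections: list[Detection]) -> list[Detection]:
    """Rank detections so closer, higher-confidence, higher-weight
    classes come first. Purely an illustrative scoring scheme."""
    def score(d: Detection) -> float:
        weight = THREAT_WEIGHT.get(d.label, 0.1)  # unknown classes score low
        return weight * d.confidence / max(d.distance_m, 1.0)
    return sorted(detections, key=score, reverse=True)
```

Even this toy version makes the article’s point concrete: the ranking is deterministic and instantaneous, but it is only as good as the labels feeding it, which is why the classification limitations discussed below matter so much.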

The use of low-cost attack drones in swarms by Iran has posed significant challenges to traditional U.S. and allied air defenses. These drones employ a saturation tactic: deploying hundreds of inexpensive, autonomous drones equipped with vision systems that can overwhelm radar and missile batteries, forcing costly interceptors to neutralize relatively low-cost threats. This has prompted the U.S. and Gulf allies to adopt AI-powered interceptors and collaborate with Ukraine, which has pioneered similar drone countermeasures during its conflict with Russia. Expertise from Ukraine is now in high demand as nations scramble to defend against Iran’s swarm drone tactics. Drone vision has evolved into a force multiplier, a shield, and a weapon all in one.

Despite the sophistication of AI-powered drone vision, human oversight remains crucial. Human perception brings context, ethical reasoning, and intuition that machines cannot replicate. Commanders must interpret intent, weigh collateral impact, and make strategic decisions. However, drones increasingly blur the line: AI vision enables autonomous detection, tracking, and engagement, performing in milliseconds what would take humans much longer. The result is a battlefield where the ability to see first and act fastest can decisively alter outcomes.

Current drones that rely on computer vision and machine learning still face limitations in context and interpretation, which highlight the challenges of today’s AI models. While AI systems excel at recognizing visual patterns, they often lack a deeper understanding of meaning, intent, and cultural context. For instance, a neural network trained to identify buildings might classify structures based on shapes or rooftops, but a school, mosque, temple, hospital, or apartment complex can appear visually similar from the air. Without additional contextual data—such as signage, activity patterns, or human oversight—the model may misclassify a building, particularly in conflict zones where training data may be limited or biased.

Another limitation is that AI models struggle with generalization and ambiguity. Many vision systems are trained on large datasets, but these datasets may not encompass the diversity of buildings, cultural architecture, or real-world conditions found in conflict zones. A mosque dome might be mistaken for another round structure, or a school playground might be confused with a public courtyard. Models can also fail when buildings are partially damaged, obscured by smoke or shadows, or when viewing angles change.

Because neural networks rely on statistical patterns rather than true understanding, they can make confident but incorrect predictions, underscoring the need for human oversight in military drone operations. These limitations highlight a key challenge in AI vision: recognizing objects is not the same as understanding their significance in the real world.

China currently dominates the global drone manufacturing market, producing the majority of commercial and consumer unmanned aerial vehicles and supplying key technologies that have shaped global markets. Government-backed industrial policy and subsidies have enabled Chinese firms to control approximately 90% of the global consumer drone market and over 70% of enterprise drones. In contrast, India is emerging as one of the fastest-growing drone markets in the Asia-Pacific region, with projected market value expected to rise from hundreds of millions to several billion dollars over the next decade. While Indian manufacturers are scaling up and benefiting from innovation, much of the current supply chain still relies on imported components, and local production has not yet reached the level of China’s integrated drone ecosystem.

In the defense sector, the United States is rapidly working to catch up, particularly as drones play an increasingly central role in conflicts like the Iran war. High-profile private investment is now intertwined with national strategy, as evidenced by Eric Trump and Donald Trump Jr. backing a domestic drone venture called Powerus, which aims to supply advanced autonomous systems to the Pentagon amid rising military demand and bans on Chinese imports.

To enhance drone capabilities, significant improvements in vision systems are necessary. Drones require better three-dimensional perception and depth understanding to navigate safely through complex environments without GPS. Enhanced object recognition in low light, adverse weather, smoke, or partial obstructions will enable them to operate where humans and current sensors struggle. Drones also need real-time scene understanding to interpret context—distinguishing civilians from combatants, moving vehicles from obstacles, or recognizing dangerous areas—and long-range visual tracking to follow multiple moving targets and predict their movements.

Integrating AI-powered autonomous decision-making will allow drones to interpret complex visual data and make mission-critical choices without human input. Swarm coordination and distributed vision will enable groups of drones to share visual information, create a unified environmental map, detect threats collectively, and execute coordinated strategies. Miniaturization and energy-efficient computing will allow drones to carry these advanced vision systems without sacrificing flight time or maneuverability, unlocking fully autonomous and intelligent flight in challenging environments.

In this new reality, dominance in the sky is defined not just by the size of the aircraft fleet but by the effectiveness of drones in seeing, interpreting, and responding to threats. AI-driven drone vision has become the defining edge in modern warfare, and countries that fail to integrate these advancements risk falling behind.

The ongoing conflict in Iran illustrates a broader trend: nations now face adversaries capable of deploying swarms of low-cost, AI-guided drones that can evade defenses and strike critical targets. Vision-powered drones are prompting a reevaluation of air power, air defense, and tactical doctrine.

According to The American Bazaar, the future of warfare will increasingly hinge on the capabilities of intelligent drones and their vision systems.

Former Meta AI Scientist Secures Over $1 Billion for Human-Centric AI

A former Meta AI scientist has raised over $1 billion to advance artificial intelligence systems that prioritize human-like reasoning and understanding.

A former Meta AI scientist has secured significant funding to support his mission of making artificial intelligence (AI) more human-centric. Advanced Machine Intelligence, a startup founded by Yann LeCun, the former chief AI scientist at Meta Platforms, announced on Tuesday that it has raised $1.03 billion at a pre-money valuation of $3.5 billion. The company aims to commercialize AI systems that focus on reasoning, planning, and developing “world models.”

Yann André LeCun is a prominent French-American computer scientist recognized for his pivotal contributions to the field of artificial intelligence. Born on July 8, 1960, in France, LeCun earned an engineering diploma from ESIEE Paris and a PhD from Université Pierre et Marie Curie before embarking on a distinguished career in AI research. He is particularly known for his foundational work in deep learning, including the development of convolutional neural networks (CNNs), which have become essential in modern computer vision, image recognition, and machine learning. In recognition of his contributions, LeCun shared the 2018 ACM Turing Award with fellow AI pioneers Yoshua Bengio and Geoffrey Hinton, marking a significant milestone in the evolution of AI technology.

LeCun joined Facebook, now known as Meta Platforms, in 2013, where he co-founded the Facebook AI Research (FAIR) lab. He later served as Meta’s Chief AI Scientist, guiding long-term research and innovation in the field. In addition to his industry work, LeCun holds academic positions, including a professorship at New York University, where he continues to teach and conduct research.

The recent funding round for Advanced Machine Intelligence was co-led by notable investors, including Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Such substantial investments indicate strong market confidence in technologies that aim to expand AI capabilities beyond mere pattern recognition, venturing into areas such as reasoning, planning, and understanding complex systems.

Advanced Machine Intelligence is strategically targeting organizations that operate complex systems, including manufacturers, automakers, aerospace companies, biomedical firms, and pharmaceutical groups. “We want to become the main provider of intelligent systems, regardless of what the application is,” LeCun stated, emphasizing the company’s ambitious goals.

This development aligns with a broader trend within the AI industry, reflecting a shift toward creating systems that can model and interpret the real world in a manner that mimics human understanding. These “world-model” approaches have the potential to enhance AI adaptability and usefulness in high-stakes or unpredictable environments. By integrating reasoning and planning capabilities into AI systems, the company aims to accelerate automation in critical sectors, improve problem-solving in complex scenarios, and foster more sophisticated human-machine collaboration.

From an economic standpoint, the significant venture funding directed toward projects like Advanced Machine Intelligence underscores the strategic importance of AI as both a technological and competitive asset. Organizations and industries that effectively adopt advanced AI tools may experience substantial advantages in productivity, innovation, and decision-making.

The future of AI appears poised for transformation as companies like Advanced Machine Intelligence work to create systems that not only perform tasks but also understand and navigate the complexities of the world in a more human-like manner. This evolution could redefine the landscape of artificial intelligence and its applications across various sectors.

According to The American Bazaar, this funding marks a significant step forward in the quest to develop AI technologies that are more aligned with human reasoning and understanding.

Researchers Identify Source of Black Hole’s 3,000-Light-Year Jet Stream

A new study connects the M87 black hole to its powerful cosmic jet, revealing how it launches particles at nearly the speed of light.

A recent study has successfully linked the renowned M87 black hole, the first black hole ever captured in an image, to its impressive cosmic jet. This research sheds light on the mechanisms behind the black hole’s ability to launch particles at nearly the speed of light.

Published in the journal Astronomy & Astrophysics, the findings reveal that scientists have traced a 3,000-light-year-long cosmic jet back to its likely source point. This breakthrough was made possible by the “significantly enhanced coverage” provided by the global Event Horizon Telescope network.

M87, a supermassive black hole located in the Messier 87 galaxy, is approximately 55 million light-years from Earth and boasts a mass 6.5 billion times that of the sun. The first image of M87 was unveiled to the public in 2019, following data collection by the Event Horizon Telescope in 2017.

Dr. Padi Boyd of NASA highlighted the significance of the discovery, noting that M87 is not only supermassive but also active. “Just a few percent are active at any given time,” she explained in a video about the black hole. “Are they turning on and then turning off? That’s an idea… We know there are very high magnetic fields that launch a jet. This image provides observational evidence that what we’ve been seeing for a while is actually being launched by a jet connected to that supermassive black hole at the center of M87.”

The black hole is known to consume surrounding gas and dust while simultaneously emitting powerful jets of charged particles from its poles, which form the extensive jet stream. This dual behavior has been reported by outlets such as Scientific American and Space.com.

Saurabh, the team leader at the Max Planck Institute for Radio Astronomy, stated, “This study represents an early step toward connecting theoretical ideas about jet launching with direct observations.” He emphasized the importance of identifying the jet’s origin and its connection to the black hole’s shadow, calling it a crucial piece in understanding how the central engine operates.

The Event Horizon Telescope is a global network of eight radio observatories that work together to detect radio waves emitted by astronomical objects, such as galaxies and black holes. This collaboration effectively creates an Earth-sized telescope capable of capturing detailed images and data.
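The “Earth-sized telescope” claim can be made concrete with the standard diffraction-limit estimate, where angular resolution is roughly wavelength divided by baseline. A minimal sketch in Python, assuming the EHT’s usual 1.3 mm observing wavelength and an Earth-diameter baseline (both well-known figures, not stated in this article):

```python
import math

def angular_resolution_uas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limit estimate theta ~ lambda / D, in microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    # radians -> degrees -> arcseconds -> microarcseconds
    return math.degrees(theta_rad) * 3600 * 1e6

# EHT observes at ~1.3 mm; its longest baselines approach Earth's diameter (~12,742 km).
resolution = angular_resolution_uas(1.3e-3, 1.2742e7)
print(f"~{resolution:.0f} microarcseconds")  # -> ~21 microarcseconds
```

That roughly 20-microarcsecond figure is what it takes to resolve structure near M87’s shadow, which is why linking observatories across the globe into one virtual dish matters.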

The term “Event Horizon” refers to the boundary surrounding a black hole beyond which no light can escape, as defined by the National Science Foundation.

The recent findings stem from data collected by the Event Horizon Telescope in 2021. However, the authors of the study caution that while the results are robust under the assumptions and tests performed, definitive confirmation and more precise constraints will necessitate future observations with higher sensitivity. This will require additional stations and an expanded frequency range to improve intermediate-baseline coverage.

As researchers continue to explore the mysteries of black holes, these findings represent a significant advancement in our understanding of how these cosmic giants operate and influence their surroundings, according to Space.com.

Fake Google Gemini AI Promotes ‘Google Coin’ Cryptocurrency Scam

Scammers are leveraging a fake AI chatbot impersonating Google’s Gemini to promote a fraudulent cryptocurrency called “Google Coin,” according to researchers from Malwarebytes.

In an alarming development in the world of cryptocurrency scams, security researchers at Malwarebytes have uncovered a fraudulent website promoting a non-existent cryptocurrency called “Google Coin.” This site features a chatbot that falsely claims to be Google’s Gemini AI, designed to lure unsuspecting investors into making cryptocurrency payments.

The scam operates under the guise of an official Google product, complete with familiar branding and visuals that create an illusion of legitimacy. Visitors to the site interact with a chatbot that introduces itself as “Gemini, your AI assistant for the Google Coin platform.” This interaction is crafted to convince users they are engaging with a credible Google service.

When users pose investment-related questions, the chatbot responds with specific financial projections, claiming that purchasing 100 tokens at $3.95 each could yield returns exceeding $2,700 once the coin is “listed.” The site employs deceptive tactics, such as fake progress counters and countdowns, to create a sense of urgency and credibility. Once a user clicks “Buy,” they are directed to send Bitcoin to a specified wallet address, with the transaction being final and irreversible.
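The scam’s own numbers make the red flag easy to quantify: the pitch implies roughly a sevenfold return on a modest outlay. A quick sanity check, using only the figures described above:

```python
tokens = 100
price_per_token = 3.95   # USD, as quoted by the fake chatbot
claimed_payout = 2700.0  # USD, the "guaranteed" return once "listed"

cost = tokens * price_per_token   # total the victim is asked to send
multiple = claimed_payout / cost  # implied return multiple

print(f"Outlay: ${cost:.2f}, implied return: {multiple:.1f}x")
# -> Outlay: $395.00, implied return: 6.8x
```

Any pitch quoting a specific near-term multiple like this, rather than acknowledging risk, is itself a classic warning sign.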

It is crucial to note that there is no official “Google Coin.” The entire operation is a sophisticated scheme designed to siphon cryptocurrency from unsuspecting individuals. This scam effectively combines brand impersonation with artificial intelligence to enhance its credibility. The scammers have meticulously crafted a website that mimics Google’s aesthetic, employing logos and technical jargon that further mislead potential victims.

The chatbot is programmed with a tightly controlled script, confidently answering inquiries while avoiding any admission of risk. If users inquire about company registration or regulatory compliance, the chatbot deflects with vague assurances regarding security and transparency. This interaction is not with a clumsy scammer but with software engineered to persuade users around the clock. The chatbot can simultaneously engage with hundreds of individuals, providing personalized responses and nudging them toward sending cryptocurrency.

The interactive nature of this scam poses a significant risk, as it can lower users’ defenses. When a chatbot responds in real time, it can create an illusion of professionalism and reliability. Many individuals may think, “If this were fake, it wouldn’t sound so convincing.” However, this is precisely the tactic employed by scammers to instill confidence.

For those who fall victim to this scheme, the financial repercussions can be immediate and irreversible. Unlike credit card transactions, cryptocurrency payments cannot be reversed. There is no customer support line to contact, and no refund process available. Furthermore, engaging with a scam site may result in personal information, such as email addresses and wallet details, being circulated among fraud networks, increasing the likelihood of future scams targeting the victim.

Researchers at Malwarebytes emphasize the growing sophistication of crypto scams, particularly those utilizing AI tools to create polished and seemingly legitimate investment opportunities. However, there are steps individuals can take to mitigate their risk before investing or sending any digital currency.

First and foremost, if a cryptocurrency claims to be launched by a well-known company, it is essential to verify the information directly on the company’s official website. Major corporations typically announce significant financial products publicly. If confirmation cannot be found on the legitimate domain, it is prudent to assume the offering is fraudulent and to walk away.

Additionally, any investment that promises guaranteed returns or specific future prices should raise red flags. Real investments inherently carry risks and uncertainties, and promises of quick, predictable profits are classic indicators of scams.

Utilizing a password manager can also enhance security by generating strong, unique passwords for each account and securely storing them. This precaution can prevent scammers from accessing other accounts if they manage to trick users into providing credentials on a fake site. Many password managers also alert users if their information appears in known data breaches.

Employing robust antivirus software is another layer of protection, as it can help detect malicious websites, phishing attempts, and suspicious downloads before they can cause harm. This can prevent hidden malware from being installed while users are distracted by convincing scam pitches.

Identity theft protection services can monitor personal information, such as Social Security numbers and email addresses, alerting users if their data is being misused. If scammers collect personal details through a fraudulent investment site, early alerts can facilitate prompt action to mitigate financial damage.

Data removal services can assist in removing personal information from public data broker sites. The less information available online, the harder it becomes for scammers to target individuals with personalized pitches. Reducing one’s digital footprint can significantly lower exposure to fraud.

Before sending any cryptocurrency, it is advisable to pause and independently verify the recipient. Searching for reviews, warnings, and official announcements can help identify potential scams. If an investment opportunity creates a sense of urgency, such as countdowns or “final stage” messages, this should be treated as a warning sign.

As scammers increasingly employ sophisticated tactics, including artificial intelligence, to create polished and persuasive narratives, awareness remains a powerful tool. By taking a moment to verify claims, question guaranteed returns, and utilize protective tools, individuals can significantly reduce their risk of falling victim to scams.

For more information on this issue, refer to the findings from Malwarebytes.

Meta Smart Glasses Face Increasing Privacy Concerns Among Users

Meta’s AI smart glasses have raised significant privacy concerns after reports revealed that contractors in Kenya may have viewed sensitive footage captured by the devices.

Meta’s AI smart glasses, designed to seamlessly integrate technology into daily life, are facing serious scrutiny following allegations of privacy violations. An investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that contractors reviewing AI data in Nairobi, Kenya, may have accessed highly personal footage captured by the smart glasses. This footage reportedly includes intimate moments such as bathroom visits and sexual activity, raising alarms about user privacy and the ethical implications of AI training.

The controversy stems from the role of AI annotators—workers who review images, videos, or audio to help artificial intelligence systems learn and improve. These annotators play a crucial role in training AI by labeling content and verifying responses. According to the investigation, some of these workers have reported viewing videos recorded by Meta’s smart glasses, which can include sensitive scenes from everyday life. One annotator described seeing everything from living rooms to naked bodies, while another noted that although faces are supposed to be automatically blurred, this feature sometimes fails, leaving identities exposed. Additionally, some clips allegedly revealed credit cards and other sensitive information.

Many users may assume that AI systems learn autonomously, but human input is often essential for their development. Meta’s smart glasses feature an AI assistant that responds to user inquiries about their surroundings, such as identifying landmarks or explaining objects. To ensure accuracy, the system sometimes relies on training data reviewed by human contractors.

In response to the allegations, a Meta spokesperson stated, “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.” The spokesperson added that when users do share content, contractors may review this data to enhance user experience, a practice common among many tech companies. Meta claims to implement measures to filter data and protect user privacy.

The Ray-Ban Meta glasses are equipped with an LED indicator light that activates when photos or videos are being recorded, alerting those nearby that content is being captured. Furthermore, the company’s terms of service emphasize that users are responsible for adhering to applicable laws and using the glasses in a respectful manner, which includes avoiding harassment and respecting privacy rights.

Meta has also been in contact with Sama, a company that provides AI data annotation services. According to Meta, Sama has stated it is unaware of any workflows involving the review of sexual or objectionable content or instances where faces or sensitive details remain unblurred. Meta is continuing to investigate the matter.

This controversy arises as Meta expands the capabilities of its AI glasses, developed in collaboration with eyewear giant EssilorLuxottica. The glasses, which include a camera and an AI assistant, have seen a surge in sales, with reports indicating over 7 million pairs sold in 2025—a significant increase over previous years. Alongside this growth, however, Meta has updated its privacy policies: AI camera features now stay active unless users disable the “Hey Meta” voice command, and the option to opt out of storing voice recordings in the cloud has been removed. For privacy advocates, these updates heighten concerns about user data protection.

The recent findings underscore a critical reality for users of smart glasses and similar wearable technology: AI devices often collect more information than users may realize. When users share content with AI systems, human reviewers may analyze that material to improve the technology, meaning that footage captured by users could be viewed by others during the training process. Moreover, wearable cameras can inadvertently record private moments, and while companies implement tools to blur faces or obscure identifying details, these systems are not infallible. As privacy policies evolve with the introduction of new AI features, staying informed about these changes is essential for users to assess their comfort level with the technology.

As smart glasses transition from novelty items to everyday gadgets, the appeal of having AI assist in understanding the world around us is undeniable. However, the same technology that enhances these devices also raises complex privacy issues. The presence of always-accessible cameras, AI systems that learn from real-world footage, and human reviewers involved in training these systems create a data chain that many users may not fully consider.

This raises a pivotal question: Would you feel comfortable wearing AI glasses knowing that someone, potentially halfway around the world, might review the footage your device captures? The implications of such technology warrant careful consideration as we navigate the intersection of innovation and privacy.

For further insights and updates on technology and privacy, visit CyberGuy.com.

Spectacular Blue Spiral Light Likely Originates from SpaceX Rocket

A stunning blue spiral light, likely from a SpaceX Falcon 9 rocket, illuminated the night sky over Europe, captivating viewers and sparking discussions on social media.

A mesmerizing blue light, reminiscent of a cosmic whirlpool, lit up the night sky over Europe on Monday. This extraordinary phenomenon was captured in striking video footage and is believed to have been caused by the SpaceX Falcon 9 rocket booster re-entering the Earth’s atmosphere.

The time-lapse video, recorded in Croatia around 4 p.m. EDT (9 p.m. local time), showcases the glowing spiral as it traverses the sky. Many social media users compared the sight to a spiral galaxy, highlighting its ethereal beauty. The full video, when played at normal speed, lasts approximately six minutes.

The Met Office in the U.K. reported receiving numerous accounts of an “illuminated swirl in the sky.” Experts indicated that the light was likely a result of the SpaceX rocket, which had launched from Cape Canaveral, Florida, at around 1:50 p.m. EDT as part of the classified NROL-69 mission for the National Reconnaissance Office (NRO), the U.S. government’s intelligence and surveillance agency.

“This is likely to be caused by the SpaceX Falcon 9 rocket, launched earlier today,” the Met Office stated on X. “The rocket’s frozen exhaust plume appears to be spinning in the atmosphere and reflecting the sunlight, causing it to appear as a spiral in the sky.”

This glowing spectacle is often referred to as a “SpaceX spiral,” according to Space.com. Such spirals form after the Falcon 9’s upper stage separates from its first-stage booster and finishes its mission: the spinning upper stage vents any leftover fuel, which freezes almost instantly at high altitude, and sunlight reflecting off the frozen particles creates the striking glow observed in the sky.

Fox News Digital reached out to SpaceX for further comment but did not receive an immediate response. The stunning display in the sky came just days after a SpaceX team, in collaboration with NASA, successfully returned two astronauts who had been stranded in space.

The captivating blue spiral not only delighted onlookers but also served as a reminder of the intricate and often spectacular phenomena associated with space exploration and rocket launches. As technology continues to advance, such displays may become more common, sparking curiosity and wonder among those who gaze upward.

According to Space.com, these phenomena highlight the remarkable interplay between human ingenuity and the natural world, as we continue to push the boundaries of what is possible in space travel.

Beware of Extortion Scam Emails Claiming Your Data Is Compromised

Experts warn that extortion scam emails claiming hackers have stolen personal data are flooding inboxes, preying on fear and urgency to manipulate victims into paying ransoms in Bitcoin.

In recent weeks, a wave of extortion scam emails has inundated inboxes across the globe, with scammers claiming to have stolen sensitive personal information. These emails often create a sense of urgency and fear, leaving recipients feeling vulnerable and anxious about their digital security.

One reader, Bobby D, reached out after receiving a particularly alarming message. “I received the attached email, and I’m wondering what to do. I have the capability to mark it as Spam with my email provider, Earthlink. Because of its threatening nature, is there any other type of action you can recommend?” he asked. “I was wondering if just designating it as spam, there really would be no deterrence for the sender?”

The content of these emails is designed to unsettle recipients. They often claim to possess complete personal information, threatening to sell it on the dark web unless a ransom—typically demanded in Bitcoin—is paid quickly. The message may read something like, “I have your complete personal information… I will send this package to dark net markets… Or you can buy it from me for 1000 USD in Bitcoin…”

If this scenario sounds familiar, you are not alone. These extortion emails are part of a widespread campaign targeting thousands of individuals. The messages are crafted to sound credible and detailed, but upon closer inspection, the warning signs become apparent.

Scammers often fail to provide any concrete evidence of their claims. There are no screenshots, passwords, or files attached to substantiate their threats. Instead, they rely on vague phrases like “a multitude of files” and “your devices,” which sound dramatic but lack specificity. In contrast, legitimate data breaches typically include detailed information.

Moreover, any email demanding payment in Bitcoin while advising recipients not to inform anyone follows a classic scam formula. Reputable companies do not operate in this manner. It is crucial to understand that these emails are not personal attacks; they are mass-produced messages sent to countless addresses simultaneously, with the hope that a small percentage of recipients will be frightened enough to comply.

It is essential to recognize that your email address may have appeared in a previous data breach, but this does not mean that your devices or accounts have been compromised. Scammers purchase lists of leaked emails and send out these threatening messages in bulk. Even a single successful payment can make the entire operation profitable for them.

If you receive one of these emails, here is the recommended course of action:

Do not respond. Engaging with the sender confirms that your email address is active, which may lead to further threats.

Do not pay the ransom. Paying does not guarantee your safety; it only indicates that the scam has worked.

Instead, flag the email as spam with your email provider, such as EarthLink. This action helps train spam filters and reduces the likelihood of similar messages reaching you and others in the future. Once reported, delete the email and move on. To Bobby’s question, marking it as spam is indeed helpful. While it may not stop the individual sender, it contributes to the broader effort to combat these scams.

While it is impossible to prevent scammers from attempting to exploit individuals, there are steps you can take to protect yourself. Reusing passwords across multiple accounts increases the risk associated with data breaches. Utilizing a password manager can help you create and store strong, unique passwords for each of your accounts.
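What a password manager’s generator does can be approximated with Python’s standard `secrets` module. This is an illustrative sketch of the “strong, unique password” advice, not a substitute for a real password manager:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    using the cryptographically secure `secrets` generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each account gets its own password; reuse is what turns one breach into many.
print(generate_password())
```

A dedicated manager adds the part that matters most in practice: securely storing a different password for every account so none ever needs to be reused.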

Additionally, check if your email has been exposed in past breaches. Some password managers include built-in breach scanners that can alert you if your information has been compromised. If you find that your email or passwords have appeared in known leaks, change any reused passwords immediately and secure those accounts with new, unique credentials.

Implementing two-factor authentication (2FA) adds an extra layer of security, even if your password is leaked. Regular updates to your software and applications can also close security gaps that scammers exploit.

Consider using data removal services to limit the amount of personal information available online. By reducing the information accessible to scammers, you make it more challenging for them to cross-reference data from breaches with what they may find on the dark web.

Never click on links in threatening emails. Robust antivirus software installed on all your devices can block malicious sites, fake support pages, and harmful links that could install malware, and it can also alert you to phishing emails and ransomware scams, safeguarding your personal information and digital assets.

Scam emails thrive on panic and urgency. Taking a moment to verify the legitimacy of a message can diminish its power. Many people question whether marking these emails as spam is effective. It is. Spam reports assist email providers in identifying patterns, blocking sender networks, and reducing future scam attempts. While you may not stop the individual scammer, your actions contribute to the protection of others.

Ultimately, extortion scam emails succeed by exploiting fear. They aim to prompt quick, unconsidered actions. By pausing to question the message and verifying its authenticity, you can defuse the threat. No files have been stolen, and no devices have been hacked—just a recycled script designed to instill fear. If you have received one of these emails, you have done the right thing by stopping and seeking advice.

Have you ever encountered a threatening email that initially caused you distress before you realized it was a scam? What helped you identify it, or what would you do differently next time? Share your experiences with us at Cyberguy.com.

According to CyberGuy.com, staying informed and vigilant is the best defense against these types of scams.

Pentagon’s AI Initiatives: A New Frontier in Defense Technology

The Pentagon’s ongoing battle over artificial intelligence will significantly influence the future of military technology and its implications for global power dynamics.

The Fox News AI Newsletter highlights the latest advancements in artificial intelligence technology, focusing on the challenges and opportunities that AI presents both now and in the future.

In this edition, we explore the Pentagon’s ongoing AI battle, which is poised to determine who controls the most powerful military technologies. As AI continues to evolve, its integration into defense systems raises critical questions about security, ethics, and global power dynamics.

Additionally, researchers at Imperial College London are developing an innovative AI-powered T-shirt designed to monitor heart health over extended periods. This groundbreaking garment aims to detect inherited heart rhythm disorders that often go unnoticed until they pose significant health risks.

In an opinion piece, Margaret Spellings emphasizes the urgency for American schools to prepare students for an AI-driven future. She notes that the rapid pace of technological change is reshaping the workforce and economy, leaving educational systems struggling to keep up.

Steve Forbes also weighs in, arguing that the nation that establishes the standards for AI will shape the future. He warns that while America has historically set the rules in various industries, China is poised to take the lead in the AI arena.

On the digital front, Microsoft has announced a new technical blueprint aimed at verifying the authenticity of online content. This initiative comes in response to the growing prevalence of misleading information on social media platforms.

In a significant move, major tech companies have backed President Donald Trump’s Ratepayer Protection Pledge, committing to absorb the costs associated with running energy-intensive AI data centers. This agreement, which includes companies like Google, Microsoft, and Amazon, aims to prevent these expenses from being passed on to consumers.

Moreover, new policies on the social media platform X are set to penalize creators who share AI-generated videos of armed conflicts without proper disclosure. This initiative seeks to combat misinformation and manipulation in online content.

Lastly, X’s AI chatbot, Grok, has begun rolling out its beta version, Grok 4.20. Elon Musk and the X team claim this update will enhance performance and introduce new features while aiming to minimize perceived political bias.

The debate surrounding the energy consumption of data centers continues to grow, as these facilities are crucial for powering AI, search engines, and various online services that people rely on daily.

Stay informed about the latest advancements in AI technology and the challenges and opportunities it presents by following the Fox News AI Newsletter.

According to Fox News, the implications of AI technology are vast and multifaceted, impacting everything from military strategy to personal health monitoring.

Wolf Species Made Famous in ‘Game of Thrones’ Revived, Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species made famous by “Game of Thrones,” using advanced genetic technologies.

A Dallas-based biotechnology company, Colossal Biosciences, has announced that it has successfully brought back the dire wolf, a species that last roamed the Earth over 12,500 years ago. The dire wolf gained popularity through the hit HBO series “Game of Thrones,” where it is depicted as a larger and more intelligent version of the common wolf, fiercely loyal to the Stark family.

Colossal Biosciences asserts that it has created three dire wolves through genome-editing and cloning techniques, marking what it claims to be the world’s first successful “de-extinction” of an animal. However, some experts question the validity of this claim, suggesting that the company has merely genetically modified existing gray wolves rather than truly resurrecting the extinct species.

According to Colossal, dire wolves inhabited the American midcontinent during the Ice Age, with the oldest confirmed fossil dating back approximately 250,000 years, discovered in the Black Hills of South Dakota. The three new wolves include two adolescent males named Romulus and Remus, and a female puppy called Khaleesi.

The scientists at Colossal utilized blood cells from a living gray wolf and employed CRISPR technology—short for “clustered regularly interspaced short palindromic repeats”—to make genetic modifications at 20 different sites. These alterations were designed to replicate traits believed to have helped dire wolves survive in cold climates, such as larger body size and longer, lighter-colored fur. Of the 20 edits made, 15 correspond to genes found in actual dire wolves.

The ancient DNA used for the project was extracted from two dire wolf fossils: a tooth from Sheridan Pit, Ohio, estimated to be around 13,000 years old, and an inner ear bone from American Falls, Idaho, which dates back approximately 72,000 years. The modified genetic material was then transferred into an egg cell from a domestic dog. Afterward, the embryos were implanted into surrogate domestic dogs, leading to the birth of the genetically engineered pups 62 days later.

Ben Lamm, CEO of Colossal Biosciences, described the achievement as a significant milestone in the company’s efforts to demonstrate the effectiveness of its de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal has previously announced similar projects aimed at genetically altering cells from living species to create animals resembling other extinct species, including woolly mammoths and dodos. In conjunction with the announcement of the dire wolves, the company also revealed the birth of two litters of cloned red wolves, which are critically endangered. This development, according to Colossal, demonstrates the potential of their de-extinction technology to aid in conservation efforts.

In late March, Colossal’s team met with officials from the U.S. Department of the Interior to discuss their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists remain skeptical about the feasibility of restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, expressed doubts regarding Colossal’s claims. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw remarked. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences reports that the newly created wolves are thriving in a 2,000-acre ecological preserve in Texas, which is certified by the American Humane Society and registered with the USDA. Looking ahead, the company aims to restore the species in secure ecological preserves, potentially on indigenous lands.

As the debate continues regarding the ethical implications and scientific validity of de-extinction efforts, the work of Colossal Biosciences represents a bold step into the future of genetic engineering and conservation.

According to Fox News, the implications of such advancements could reshape our understanding of extinct species and their potential return to the ecosystem.

The Rise of Efficient AI: Balancing Energy Needs During the Boom

The emergence of ‘efficient’ or ‘green’ AI is reshaping the technology landscape, as companies strive to reduce energy consumption amid soaring demand for artificial intelligence.

Energy efficiency is emerging as a crucial competitive metric alongside performance and scalability in the rapidly evolving AI landscape. As the demand for AI technologies surges, companies are racing to develop models that consume significantly less electricity.

Vasudha Badri Paul, CEO of Avatara AI, emphasizes the importance of this trend, stating, “Companies that adopt an energy-first approach for AI are the future.”

As artificial intelligence becomes increasingly integrated into daily life—from search engines to business applications—a pressing concern has emerged regarding the growing energy footprint associated with these technologies. A recent report from TRG Datacenters sheds light on this challenge, revealing that leading AI developers are making strides to enhance the energy efficiency of their models.

Chris Hinkle, CEO of TRG Datacenters, notes the alarming trajectory of AI demand: “The math is simple but scary: AI demand is on track to quadruple by 2030, and our power grids just aren’t built for that speed. We’re hitting a physical wall where we can’t just build more data centers; we have to make the software stop being so ‘hungry.’”

The study conducted by TRG Datacenters examined major language models to assess how companies are saving energy amid the technology’s growth. The findings indicate a clear trend: the latest generation of AI models is becoming significantly more efficient, even as usage continues to rise. Many experts agree that enhancing the energy efficiency of AI systems is as vital as expanding their capabilities, particularly given the exponential growth in global demand.

Among the models analyzed, Grok 4.1 stands out for its efficiency gains, reducing energy consumption by 38 percent compared to its predecessor. Despite processing 134 million daily queries, Grok 4.1 cut its per-query power requirement from 0.55 watt-hours to 0.34. This improvement also lowered the average cost per request from $0.000098 to $0.000061, marking the most significant enhancement recorded in the study. Researchers have hailed it as “the most energy-efficient model in the world today.”
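As a back-of-the-envelope check on the figures reported for Grok 4.1 (not part of the study itself), the percentage reduction and the implied daily savings can be recomputed directly:

```python
# Sanity-check the reported Grok 4.1 efficiency figures:
# 0.55 -> 0.34 Wh per query across 134 million daily queries.

def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

QUERIES_PER_DAY = 134_000_000
WH_BEFORE, WH_AFTER = 0.55, 0.34

energy_cut = reduction_pct(WH_BEFORE, WH_AFTER)  # ~38%, matching the study
daily_savings_mwh = QUERIES_PER_DAY * (WH_BEFORE - WH_AFTER) / 1e6  # Wh -> MWh

print(f"Energy cut per query: {energy_cut:.0f}%")
print(f"Implied daily savings: {daily_savings_mwh:.1f} MWh")
```

The 38 percent figure checks out, and the implied savings come to roughly 28 megawatt-hours per day for this one model alone.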

This trend reflects a broader movement within the technology sector toward what experts are calling Green AI, an approach focused on minimizing the environmental impact of large-scale artificial intelligence systems. Sridhar Verose, a council member in San Ramon and a technologist with over two decades of experience in cloud operations and digital transformation, underscores the necessity of this shift. “Green AI is driven by the need to reduce the rapidly growing energy demands of large-scale AI models. A multi-layered approach combines energy-efficient hardware, algorithmic efficiency, and specialized, smaller model architectures,” he explains.

The research also highlights Google’s Gemini 3, which ranks second in energy efficiency, achieving a 35 percent reduction in energy consumption. The model supports an estimated 850 million daily queries while maintaining the lowest cost per request in the ranking at just $0.000043. By cutting its power usage by more than a third, Gemini 3 demonstrates that large-scale AI systems can expand rapidly while keeping operating costs and electricity demand manageable.

Other leading AI systems have also reported significant improvements. Claude Opus 4.5 from Anthropic reduced electricity use by 27 percent while processing around 180 million daily queries. Meanwhile, the China-developed DeepSeek V3.2 improved efficiency by 25 percent while handling approximately 650 million daily queries.

The urgency for energy-efficient AI is escalating as global demand continues to rise. Data centers are already responsible for a growing share of electricity consumption, and the explosive growth of generative AI tools is expected to further accelerate this trend.

Vasudha Badri Paul reiterates the need for aligning AI development with climate considerations. “The need is to align computing with the future of climate by using stranded, wasted energy to power AI workloads. Companies that adopt an energy-first approach for AI are the future,” she asserts.

If the findings from the research are any indication, the coming years could see even more energy-efficient models. Efficiency gains of 30 percent or more from models such as Grok and Gemini signal meaningful progress in the field.

Hinkle also emphasizes that the shift toward efficiency is critical for sustaining the rapid growth of AI. “Seeing models like Grok or Gemini slash their energy use by 30% or more proves that we can actually make these systems smarter without just throwing more juice at them,” he states.

He further illustrates the impact of these efficiency improvements by referencing GPT-5.2, which achieved a 19 percent reduction across 2.5 billion daily hits, a savings he says is enough to power an entire city. “This kind of ‘efficiency-first’ mindset is the only way we keep the lights on while the AI boom continues,” Hinkle concludes.

As the demand for AI technologies continues to rise, the push for energy-efficient solutions will be paramount in ensuring a sustainable future for artificial intelligence.

The efficiency figures and model comparisons above are according to TRG Datacenters.

U.S. Introduces New Regulations for AI Chip Exports

The United States is considering new regulations for exporting artificial intelligence chips, potentially requiring foreign investments in U.S. data centers as a condition for large-scale exports.

The United States is contemplating the introduction of new rules governing the export of artificial intelligence (AI) chips. According to a document reviewed by Reuters, U.S. officials are in discussions about a regulatory framework that may require foreign nations to invest in U.S. AI data centers or provide security guarantees as a prerequisite for exporting 200,000 chips or more.

This initiative marks the first significant attempt to regulate the export of AI chips to U.S. allies and partners since the Trump administration rescinded the previous administration’s AI diffusion rules. Those earlier rules aimed to retain a substantial portion of AI infrastructure development within the U.S. and directed most purchases through a select group of American cloud computing companies.

Saif Khan, a former national security official in the Biden administration and now affiliated with the Institute for Progress, a Washington think tank, commented on the potential impact of the proposed regulations. “The rule could help the U.S. government address chip diversion to China and ensure a more secure buildout of the most powerful AI supercomputers,” he said. “However, the license requirements are overly broad, applying globally, which raises concerns that the administration intends to use these controls as negotiation leverage with allies rather than strictly for security purposes.”

If implemented, this proposal could provide the Trump administration with significant leverage in negotiating investments in the U.S., aligning with one of Trump’s key priorities as it determines the allocation of AI chips to various countries.

The U.S. Commerce Department has expressed its commitment to promoting secure exports of American technology. “We successfully advanced exports through our historic Middle East agreements, and there are ongoing internal government discussions about formalizing that approach,” the department stated.

The potential regulation of AI chip exports reflects a broader shift in the intersection of technology, national security, and economic strategy on the global stage. As AI technology becomes increasingly integral to commercial innovation and geopolitical influence, controlling the distribution of critical hardware serves not only to protect domestic interests but also to shape international partnerships.

Such measures could redefine the balance of power in AI development, encouraging foreign nations to collaborate closely with U.S. infrastructure and security frameworks. This approach aims to ensure that sensitive technology is not diverted in ways that could compromise strategic objectives.

Beyond immediate security concerns, this strategy underscores a growing recognition that advanced technologies are intertwined with economic and diplomatic leverage. By linking chip exports to investments or commitments in U.S.-based infrastructure, the U.S. could establish new standards for how technological ecosystems are developed, maintained, and shared globally.

This regulatory approach may foster more sustainable and accountable global tech development while enhancing the U.S.’s influence in shaping AI norms and safeguards.

The potential changes to AI chip export regulations highlight the evolving landscape of international technology policy, where economic interests and national security considerations increasingly intersect.

As discussions continue, the outcome of these deliberations could have far-reaching implications for the future of AI technology and its role in global economic dynamics, according to Reuters.

AI Uncovers $163K in Fraudulent Medical Bill Charges

A man successfully reduced a hospital bill by over $100,000 using AI tools to identify billing errors, highlighting the potential of technology in managing medical expenses.

In a remarkable case, a man utilized an AI chatbot to significantly reduce a hospital bill following his brother-in-law’s tragic heart attack. The initial bill for just four hours of emergency care totaled an astonishing $195,628. However, before his sister-in-law could pay, he urged her to wait and requested an itemized bill that included CPT codes—the standardized billing codes used by hospitals.

After receiving the itemized bill, he input the information into Claude, an AI chatbot. Within minutes, Claude identified numerous discrepancies, including duplicate charges, services billed as “inpatient” despite the patient never being admitted, and supply costs inflated by 500% to 2,300% above Medicare rates. Additionally, there were charges for procedures that had not occurred. To ensure accuracy, he cross-checked the findings with ChatGPT, which corroborated Claude’s results.

Armed with this information, he drafted a six-page letter detailing each violation. As a result, the hospital agreed to reduce the bill to $33,000, marking an impressive 83% decrease—all achieved without any medical training and with the help of a $20 app.

This story, while extraordinary, is not as isolated as it may seem. The Medical Billing Advocates of America estimates that approximately 75% of medical bills contain errors. On average, hospital bills exceeding $10,000 have around $1,300 in mistakes. Alarmingly, less than 1% of denied insurance claims are ever appealed, indicating that many patients may be unaware of their rights and the potential for errors in their bills.

AI technology is transforming the way patients can approach their medical billing disputes. With AI tools, individuals no longer need an extensive understanding of CPT codes or a background in medical billing to challenge their bills effectively. The process is straightforward:

First, contact your healthcare provider and request an itemized bill that includes CPT codes. It is important to ask for the full line-by-line breakdown rather than a summary, as patients are legally entitled to this information.

Next, open an AI tool such as ChatGPT, Claude, Grok, or Gemini (free versions are available) and paste the following request:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

After pasting your bill, the AI will analyze each line and highlight any discrepancies or errors it identifies.

If the AI uncovers mistakes—something that is likely—contact the billing department and ask to speak with a supervisor. Be sure to reference the specific codes and findings from your AI analysis. Hospitals are often willing to resolve disputes when patients come prepared with detailed information.
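Some of the checks the chatbot reportedly performed, such as spotting duplicate line items and charges far above Medicare rates, can also be pre-screened mechanically before the bill ever reaches an AI tool. The sketch below is illustrative only: the bill format, CPT codes, and reference-rate table are invented for the example, not drawn from any real billing system.

```python
# Hypothetical pre-screen of an itemized bill: flag duplicate CPT lines
# and charges far above a reference rate. Data shapes are illustrative.
from collections import Counter

def prescreen(bill: list[dict], reference_rates: dict[str, float],
              inflation_threshold: float = 5.0) -> dict:
    """Return duplicated CPT codes and lines charged at more than
    `inflation_threshold` times the reference rate."""
    counts = Counter(line["cpt"] for line in bill)
    duplicates = [code for code, n in counts.items() if n > 1]
    inflated = [
        line for line in bill
        if line["cpt"] in reference_rates
        and line["charge"] > inflation_threshold * reference_rates[line["cpt"]]
    ]
    return {"duplicates": duplicates, "inflated": inflated}

bill = [
    {"cpt": "99285", "desc": "ER visit, high severity", "charge": 3200.0},
    {"cpt": "99285", "desc": "ER visit, high severity", "charge": 3200.0},
    {"cpt": "A4216", "desc": "Sterile saline",          "charge": 230.0},
]
rates = {"99285": 620.0, "A4216": 10.0}  # made-up reference rates

flags = prescreen(bill, rates)
print(flags["duplicates"])  # duplicated CPT codes to raise with billing
```

A screen like this is no substitute for the AI review described above, but it shows why such errors are easy to catch once the itemized, code-level breakdown is in hand.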

For those looking for additional resources, Counterforce Health (counterforcehealth.org) is a free AI tool specifically designed to assist with insurance denial appeals and is worth bookmarking for future reference.

As the landscape of healthcare billing continues to evolve, it is crucial for patients to take a proactive approach in reviewing their medical bills. Utilizing AI tools can empower individuals to challenge inaccuracies and potentially save significant amounts of money.

In a world where discussions about AI are prevalent, practical applications like this demonstrate how technology can be harnessed to address real-life challenges. For those seeking further insights into leveraging AI effectively, consider subscribing to the free newsletter, Splash of AI, which offers weekly tips and tools designed to simplify the use of technology in everyday life.

Sharing this information with someone who is grappling with a confusing medical bill could lead to substantial savings. It takes less time than brewing a cup of coffee and could save hundreds or even thousands of dollars.

Kim Komando, a trusted voice in technology, provides straightforward advice without the jargon. Her national radio show, available on over 500 stations, along with a free daily newsletter, YouTube content, and podcasts, offers valuable insights for navigating the tech landscape.

For more information, visit Komando.com.

According to Fox News, the integration of AI in managing medical bills is becoming an essential tool for patients seeking to rectify billing errors.

Shreya Parchure Uses AI to Aid Stroke Survivors in Speech Recovery

Shreya Parchure, an Indian American doctoral student, is pioneering an AI tool to personalize speech therapy for stroke survivors, enhancing recovery prospects for those affected by post-stroke aphasia.

Shreya Parchure, an Indian American researcher and doctoral student at the University of Pennsylvania, is making significant strides in the field of speech therapy for stroke survivors. Her innovative approach utilizes artificial intelligence (AI) to personalize treatment for individuals suffering from post-stroke aphasia, a condition that impairs the ability to understand or produce speech and affects approximately one-third of stroke survivors.

Growing up across two continents, Parchure developed a deep appreciation for the importance of language in enhancing quality of life. Her clinical rotations in a neurocritical care unit further solidified her commitment to advancing research and care for patients with aphasia. During her interactions with patients, she witnessed firsthand the profound impact that speech therapy can have on recovery. One patient, who initially struggled to speak, gradually regained her ability to communicate through dedicated therapy. “She was overjoyed,” Parchure recalls, highlighting how progress in speech therapy can instill hope in patients.

Traditional speech therapies for post-stroke aphasia often follow standardized protocols. However, Parchure and her team at the Laboratory for Cognition and Neural Stimulation (LCNS) are exploring the potential of “explainable AI.” This set of machine learning methods focuses on providing clear rationales behind AI-generated results, enabling healthcare providers to interpret and trust the recommendations made by the technology.

While some AI models have utilized neuroimaging and the duration since a stroke to assess aphasia severity, Parchure’s research expands on these methods by incorporating how language is formed and processed in the brain. “Explainable AI can integrate clinically available data—such as age, education, or the size of a stroke—with the linguistic difficulty of words,” she explains. This multifaceted approach allows the AI model to predict recovery timelines and suggest tailored treatments based on individual patient circumstances.

“When we have an AI making a prediction, we really want to know why,” Parchure emphasizes. She has leveraged speech samples from patients with post-stroke aphasia to train an explainable AI algorithm, testing its ability to account for various language tasks and make recovery predictions based on a diverse array of clinically relevant information. The tool also considers personal attributes, such as the size of the stroke and the level of social support available to the patient.

“Incorporating language into the fold adds a new layer of considering human and brain complexity,” Parchure notes. The explainable AI tool can predict speech performance on a word-by-word basis, which can help clinicians identify the underlying factors affecting a patient’s speech abilities. This granularity informs more nuanced treatment plans and recovery predictions.

“It’ll help tailor speech therapy for where exactly people are having trouble,” Parchure states. “We can really meet patients where they are in a more personalized manner.” To facilitate this, Parchure and her colleagues have developed an AI-powered application for use in both clinical and research settings. A particularly innovative aspect of this research is the creation of a “digital twin” for each patient, which serves as a predictive tool for language recovery.

The simulated “twin” allows for a comparative analysis of how a patient may respond to different treatments, enhancing the efficiency of clinical trials by enabling researchers to compare projected outcomes with actual recovery results. “The goal of my MD-PhD training has been to translate advances in research in a way that will benefit patients,” Parchure explains. Her work has already garnered recognition, including the Best Poster award in Translational Research at the 2025 PSOM Student Research Symposium.

Looking ahead, Parchure envisions a future where AI plays a crucial role in personalizing speech therapy, ultimately helping stroke survivors with aphasia reconnect with the joy of language. “Over the next decade, I believe we will see significant advancements in this area,” she concludes.

According to Penn Today, Parchure’s research represents a promising development in the intersection of technology and healthcare, offering hope to countless individuals affected by stroke.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals.

Google is embarking on an ambitious project to harness artificial intelligence (AI) in order to decode the complex communication of dolphins, with the ultimate goal of enabling humans to converse with these intelligent creatures.

Dolphins have long been celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit that has dedicated over 40 years to studying and recording dolphin sounds, Google is developing a new AI model named DolphinGemma.

The WDP has been instrumental in correlating different types of dolphin sounds with specific behavioral contexts. For example, signature whistles are often used by mothers to reunite with their calves, while burst pulse “squawks” are typically observed during aggressive encounters among dolphins. Additionally, “click” sounds are frequently employed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by the WDP, Google has created DolphinGemma, which builds upon its existing lightweight AI model known as Gemma. This innovative model is designed to analyze the vast library of dolphin vocalizations, detecting patterns, structures, and potential meanings behind their communications.

Over time, DolphinGemma aims to categorize dolphin sounds in a manner akin to human language, organizing them into what could resemble words, sentences, or expressions. According to a blog post by Google, “By identifying recurring sound patterns, clusters, and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort.”
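The kind of pattern mining described here, finding recurring sequences in a stream of vocalizations, can be illustrated with a toy example. The call labels and the sequence below are invented for illustration; DolphinGemma itself learns from raw audio, not hand-labeled symbols.

```python
# Toy sketch of recurring-sequence mining over labeled dolphin sounds:
# count n-gram sequences and keep those that repeat.
from collections import Counter

def recurring_ngrams(sounds: list[str], n: int = 2, min_count: int = 2):
    """Return n-gram sequences occurring at least `min_count` times."""
    grams = Counter(tuple(sounds[i:i + n]) for i in range(len(sounds) - n + 1))
    return {gram: count for gram, count in grams.items() if count >= min_count}

# Invented sequence of call types for the example
calls = ["whistle", "click", "squawk", "whistle", "click",
         "click", "whistle", "click", "squawk"]

patterns = recurring_ngrams(calls, n=2)
print(patterns)  # pairs of calls that reliably follow one another
```

Reliable sequences like these are the raw material from which researchers hope to recover structures resembling words or expressions.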

The project also envisions the creation of a shared vocabulary between dolphins and humans. By augmenting the identified sound patterns with synthetic sounds that refer to objects dolphins enjoy, researchers hope to establish a basis for interactive communication.

DolphinGemma employs advanced audio recording technology from Google’s Pixel phones, which enables the capture of high-quality sound recordings of dolphin vocalizations. This technology is capable of filtering out background noise, such as waves, boat engines, and underwater static, ensuring that the AI model receives clear audio data. Researchers emphasize that clean recordings are crucial for the effectiveness of AI models like DolphinGemma, as noisy data can lead to confusion.

Google plans to release DolphinGemma as an open model this summer, allowing researchers worldwide to utilize and adapt it for their own studies. Although the model has been primarily trained on Atlantic spotted dolphins, it has the potential to assist in the study of other species, such as bottlenose or spinner dolphins, with some adjustments.

In the words of Google, “By providing tools like DolphinGemma, we hope to give researchers worldwide the means to mine their own acoustic datasets, accelerate the search for patterns, and collectively deepen our understanding of these intelligent marine mammals.”

As this groundbreaking project unfolds, it holds the promise of not only enhancing our understanding of dolphin communication but also fostering a deeper connection between humans and these remarkable creatures.

According to Google, the advancements made through DolphinGemma could pave the way for unprecedented interactions with dolphins, enriching both scientific knowledge and human experience.

Indian-American Researchers Launch AI Legislation Tracking Portal

Researchers at Brown University, led by Indian American professor Suresh Venkatasubramanian, have launched a portal to track and analyze pending AI legislation across the United States.

A team of researchers from Brown University, under the leadership of Indian American professor Suresh Venkatasubramanian, has unveiled a new tool designed to track and analyze pending artificial intelligence (AI) legislation at both the federal and state levels in the United States. This initiative aims to address the rapidly evolving landscape of AI technologies and their regulation.

The CNTR AISLE Portal serves as a public database that aggregates information on AI legislation currently pending across all 50 states and at the federal level. It also provides in-depth analyses conducted by trained evaluators, detailing the various aspects of AI policy that these bills encompass.

Developed by a collaborative team of faculty, students, and staff at the Center for Technological Responsibility, Reimagination and Redesign (CNTR), the portal is a significant step toward enhancing public understanding of AI legislation. Venkatasubramanian, who is a professor of computer science and data science at Brown, emphasized the importance of this tool in the context of the growing number of AI-related bills introduced in the U.S. “Over the last three years, over 1,000 AI-related bills have been introduced in the U.S.,” the AISLE team noted at the launch. “With AISLE, we will help the public, journalists, researchers, and policymakers identify key policy trends and assess the maturity of these proposals.”

The AISLE Portal features a comprehensive bill library that compiles all AI-related legislation from a larger legislative database known as LegiScan. A subset of these bills has been evaluated by the AISLE policy team, which consists of 17 undergraduate students and five graduate students trained to assess legislation using the AISLE framework.

This framework includes a set of 159 questions designed to evaluate the extent to which each bill pertains to six general categories: accountability and transparency, data protection, bias and discrimination, education, synthetic content, and the labor force. For each bill assessed, the portal provides a “bill profile” that summarizes its content according to the AISLE framework.
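A bill profile of the kind described might be represented as a simple data structure: per-category coverage scores derived from the framework's questions. The field names and scoring scheme below are illustrative assumptions, not the portal's actual schema.

```python
# Illustrative "bill profile": how heavily a bill touches each of the
# six AISLE categories. Field names and scores are invented.
from dataclasses import dataclass, field

CATEGORIES = ["accountability_transparency", "data_protection",
              "bias_discrimination", "education",
              "synthetic_content", "labor_force"]

@dataclass
class BillProfile:
    bill_id: str
    state: str
    # fraction of framework questions each category answers affirmatively
    coverage: dict[str, float] = field(default_factory=dict)

    def top_categories(self, k: int = 2) -> list[str]:
        """Categories the bill addresses most heavily."""
        return sorted(self.coverage, key=self.coverage.get, reverse=True)[:k]

profile = BillProfile(
    bill_id="HB-1234", state="RI",
    coverage={"accountability_transparency": 0.6, "data_protection": 0.3,
              "bias_discrimination": 0.1, "education": 0.0,
              "synthetic_content": 0.8, "labor_force": 0.0},
)
print(profile.top_categories())  # the bill's dominant policy topics
```

A representation like this supports exactly the trend analysis the team describes, such as tracking which categories gain attention across legislative sessions.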

Venkatasubramanian highlighted the team’s commitment to developing objective standards for evaluating legislation. “The goal here is not for us to say which bills we think are good and which ones are bad,” he explained. “Instead, we want to provide an easily digestible format for people to see what kinds of topics each bill covers and better understand where policymakers are in terms of addressing developments in AI.”

As of now, the team has evaluated approximately 100 bills, with plans to continue adding analyses on a rolling basis. Their ultimate goal is to evaluate enough legislation to identify large-scale trends in AI governance and legislation.

“With the analysis data that AISLE has provided, it is possible to understand which topics come in and out of the spotlight in each year’s legislative session, such as the rise in attention paid to the consequences of AI-generated synthetic content,” Venkatasubramanian noted. “We were also able to analyze similarities between bills to understand how ideas spread and diffuse across different states, and how ‘template’ bills influence how legislators draft legislation.”

The CNTR AISLE project is still in its early stages, with plans to introduce new features to the portal in the coming weeks. As legislative sessions for 2026 commence across the country, the team hopes that the portal will prove beneficial to a diverse range of users, including policymakers, journalists, and the general public.

“When we started work on AISLE, we hoped that the system we were building would be useful to policymakers, the press, and the public,” Venkatasubramanian said. “But as our team has grown, and as the work has developed, I’ve come to realize how invaluable AISLE is as an educational experience for the many students in technical and non-technical disciplines interested in AI policy. It has also become clear that AISLE lays the foundation for long-term scholarly research on how efforts to shape this critical and transformative technology are evolving over time.”

Venkatasubramanian has an impressive background, having served as the Assistant Director for Science and Justice in the White House Office of Science and Technology Policy during the Biden-Harris administration, where he co-authored the Blueprint for an AI Bill of Rights. He has also received several accolades for his research, including a CAREER award from the National Science Foundation for his work in the geometry of probability, a test-of-time award at ICDE 2017 for his contributions to privacy, and a KAIS Journal award for his work on auditing black-box models.

As the CNTR AISLE project continues to evolve, it promises to be a vital resource in understanding the legislative landscape surrounding AI technologies in the United States, fostering informed discussions and decisions about the future of AI policy.

According to The American Bazaar, the launch of the AISLE Portal marks a significant advancement in the effort to track and analyze AI legislation nationwide.

Data Breach at Figure Exposes Nearly One Million Accounts

Nearly 1 million accounts were compromised in a data breach at Figure Technology Solutions, exposing sensitive personal information due to a social engineering attack.

In a significant data breach, hackers have exposed personal information from 967,200 accounts at Figure Technology Solutions, a blockchain-focused fintech lender. The compromised data includes names, addresses, email addresses, and dates of birth.

For anyone who has applied for a loan online, the breach is a reminder of how exposed that shared information can be. Your name, email, date of birth, and even your home address may now be circulating on dark web forums. This is the unfortunate situation for nearly 1 million individuals following the breach at Figure Technology Solutions, which was founded in 2018 and utilizes the Provenance blockchain for lending, borrowing, and securities trading.

Figure claims to have unlocked over $22 billion in home equity through partnerships with banks, credit unions, fintechs, and home improvement companies. However, behind the scenes, a different story unfolded as attackers executed a social engineering attack to gain access to sensitive data.

According to breach notification data shared by Have I Been Pwned, the leaked information includes more than 900,000 unique email addresses, along with names, phone numbers, physical addresses, and dates of birth. This trove of personal data presents a significant opportunity for identity thieves.

A spokesperson for Figure Technology Solutions explained that the breach resulted from an employee being socially engineered into providing access. “We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,” the spokesperson stated. “We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate. We are also implementing additional safeguards and training to further strengthen our defenses. We are offering complimentary credit monitoring to all individuals who receive a notice. We continuously monitor accounts and have strong safeguards in place to protect customers’ funds and accounts.”

While blockchain technology is often associated with security and invulnerability, this incident underscores that attackers can exploit human vulnerabilities rather than breaking through cryptographic defenses. The hacking group ShinyHunters has been linked to the breach, reportedly claiming responsibility and posting 2.5GB of data tied to thousands of loan applicants on the dark web.

In recent weeks, ShinyHunters has also claimed responsibility for breaches involving other companies, including Canada Goose, Panera Bread, and SoundCloud. Although not every case is connected, security researchers have noted a concerning trend where attackers impersonate IT support, create urgency, and direct employees to fake login portals that closely resemble legitimate ones. Once employees enter their credentials, including multi-factor authentication codes, attackers can gain access to single sign-on systems linked to major platforms like Microsoft and Google. This can lead to a cascade of compromised accounts and internal systems.

The implications of the Figure data breach are significant. If your information was part of the breach, criminals now possess enough detail to craft convincing phishing emails or phone scams. They can reference your real name and address, potentially impersonating a lender or bank regarding your application.

Even if you have never applied for a loan with Figure, this incident highlights a broader issue: no platform is immune to human error. Social engineering works by targeting trust rather than technology. While Figure promotes itself as a blockchain-native company, the reality is that blockchain technology does not protect against well-crafted phone calls or social manipulation.

As financial services increasingly move online, the attack surface for potential breaches expands. Loan applications, identity verification tools, and cloud-based systems offer convenience but also create new vulnerabilities.

To protect yourself following the Figure data breach, it is essential to take proactive steps. While you cannot control how companies secure their systems, you can manage your response. Start by checking whether your email address appears in the exposed dataset by visiting Have I Been Pwned and searching for it there.
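For those who prefer to automate the same lookup, Have I Been Pwned also publishes a REST API (v3). The sketch below is a minimal, non-authoritative example of querying its documented `breachedaccount` endpoint, which requires a paid `hibp-api-key`; the service answers 200 with a JSON list of breaches, or 404 when the address is not in any known breach.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breaches_for(email: str, api_key: str) -> list[str]:
    """Return the names of known breaches containing this address.

    HIBP responds 200 with a JSON array of breach records, or 404
    when the address does not appear in any indexed breach.
    """
    req = urllib.request.Request(
        API + urllib.parse.quote(email),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return [b["Name"] for b in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:  # address not found in any indexed breach
            return []
        raise
```

A non-empty list means the address has surfaced in at least one indexed breach; for a one-off check, the website's search form remains the simplest option.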

Additionally, be cautious of unexpected calls regarding your accounts. If someone pressures you to act immediately, it is advisable to hang up and contact the company directly using a number from its official website.

The Figure data breach serves as a stark reminder that technology alone cannot safeguard sensitive information. A single employee tricked into revealing credentials can expose hundreds of thousands of individuals. This incident is not a failure of blockchain technology but rather a failure of trust.

If your data was involved in the breach, it is crucial to take action now. Even if it was not, this incident should serve as a wake-up call. Your personal information holds significant value, and criminals are aware of this. Companies must also recognize the importance of investing in employee training and security measures to prevent such breaches in the future.

As we navigate an increasingly digital landscape, the question remains: are companies doing enough to protect sensitive information, or are they relying too heavily on technology alone? This breach raises critical concerns about the adequacy of current security practices and the need for a more comprehensive approach to safeguarding personal data.

For further insights and updates on cybersecurity, visit CyberGuy.

US Supreme Court Declines Review of AI-Generated Art Copyright Case

The U.S. Supreme Court has opted not to address the copyright eligibility of art created by artificial intelligence, leaving lower court decisions intact.

The U.S. Supreme Court declined on Monday to consider whether art generated by artificial intelligence (AI) can be copyrighted under U.S. law. This decision comes in response to a case involving Stephen Thaler, a computer scientist from Missouri, who was denied copyright protection for a piece of visual art created by his AI technology.

Thaler had approached the Supreme Court after lower courts upheld a ruling from the U.S. Copyright Office, which stated that works produced by AI are ineligible for copyright protection due to the absence of a human creator. Thaler, based in St. Charles, Missouri, applied for federal copyright registration in 2018 for his artwork titled “A Recent Entrance to Paradise.” The piece depicts train tracks leading into a portal, surrounded by vibrant green and purple plant imagery.

In 2022, Thaler’s application was rejected on the grounds that copyright law requires a human author for creative works. The Supreme Court’s refusal to hear the case means that this decision remains in effect.

The Trump administration had previously urged the Supreme Court not to take up Thaler’s appeal. The Copyright Office has also denied copyright requests from other artists seeking protection for images generated with the AI platform Midjourney. Unlike Thaler, these artists claimed they deserved copyright for images they created with AI assistance, while Thaler argued that his AI system independently generated “A Recent Entrance to Paradise.”

A federal judge in Washington upheld the Copyright Office’s decision in Thaler’s case in 2023, emphasizing that human authorship is a fundamental requirement for copyright eligibility. This ruling was later affirmed by the U.S. Court of Appeals for the District of Columbia Circuit in 2025.

Thaler’s legal team expressed concern over the implications of the Copyright Office’s stance, stating, “Even if it later overturns the Copyright Office’s test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

The administration reiterated its position, noting that while the Copyright Act does not explicitly define the term “author,” various provisions indicate that it refers to a human rather than a machine.

This is not the first time the Supreme Court has declined to address issues surrounding AI and intellectual property. Thaler previously sought the Court’s intervention in a separate case regarding whether AI-generated inventions could qualify for U.S. patent protection. His patent applications were similarly rejected by the U.S. Patent and Trademark Office on grounds consistent with those applied to his copyright claims.

The Supreme Court’s decision not to engage with the complexities of AI-generated art and its copyright implications leaves significant questions unanswered, particularly as AI technology continues to evolve and permeate various creative fields.

As the debate over AI and intellectual property rights continues, the implications of these rulings may have lasting effects on artists, technologists, and the broader creative industry.

According to The American Bazaar, the Supreme Court’s decision underscores the ongoing challenges faced by creators and innovators in navigating the intersection of technology and copyright law.

Iranian Networks Experience Disruptions Amid Airstrikes, Highlighting Digital Conflict Evolution

A recent cyberattack during airstrikes on Iran underscores the increasing importance of digital warfare in modern conflicts, revealing vulnerabilities in global networks and offering critical cybersecurity lessons.

A significant cyberattack coincided with airstrikes on Iran, illustrating the evolving nature of warfare, in which digital conflicts play a crucial role. On February 28, 2026, during Operation Roar of the Lion, fighter jets and cruise missiles targeted Iranian Revolutionary Guard command centers. At the same time, a parallel cyber offensive reportedly unfolded, resulting in widespread disruptions across the nation.

As missiles rained down, Iran experienced a near-total digital blackout. Key media platforms and official news sites went offline, while government digital services and local applications failed in major cities. According to NetBlocks, a global internet monitoring organization, internet traffic in Iran plummeted to just 4 percent of normal levels, indicating either a state-ordered shutdown or a large-scale cyberattack aimed at crippling critical infrastructure.

Western intelligence sources later suggested that the cyber offensive was designed to disrupt the command and control systems of the Islamic Revolutionary Guard Corps (IRGC) and hinder their ability to coordinate counterattacks. This incident serves as a stark reminder that modern warfare increasingly intertwines airstrikes with digital assaults, creating repercussions that extend far beyond the battlefield.

Reports indicated widespread outages throughout Iran, with major news outlets such as the state-run IRNA going offline. Tasnim, a semi-official news agency aligned with the IRGC, even displayed subversive messages targeting Supreme Leader Ali Khamenei. The IRGC, which plays a pivotal role in Iran’s national security and regional operations, faced significant operational challenges as local apps and government services failed in cities like Tehran, Isfahan, and Shiraz.

This was not merely a case of a single website being defaced; the attack appeared systemic. Electronic warfare reportedly disrupted navigation and communication systems, while distributed denial of service (DDoS) attacks overwhelmed networks with excessive traffic, rendering them inoperable. Deep intrusions targeted critical sectors such as energy and aviation, further exacerbating the crisis. Even Iran’s isolated national internet struggled under the pressure.

For a regime that tightly controls information, losing digital command poses both operational and political risks. Cyber operations can achieve objectives without the immediate loss of life, allowing for disruption without triggering full-scale war—a vital consideration in a region where escalation can occur rapidly. Historically, Iran has demonstrated an understanding of this strategy, having previously targeted U.S. financial institutions and Saudi Aramco in cyberattacks between 2012 and 2014.

Following Israeli strikes in 2025, cyberattacks targeting Israel surged dramatically within days. Cyber retaliation provides leaders with a means to respond while minimizing direct military confrontation, thereby gaining leverage in negotiations without crossing critical thresholds.

However, there is a significant risk involved. Each cyber strike carries the potential for miscalculation, and damage to critical infrastructure can quickly escalate into real-world consequences. If the recent blackout and airstrikes mark a turning point, Tehran has several options, none of which are straightforward. Cyber retaliation remains one of Iran’s most adaptable tools, ranging from disruptive attacks to influence campaigns that pressure critical services.

Experts warn that U.S. cyber defenses and the private sector may face sustained challenges in the wake of these events. Iran has previously utilized drones and electronic interference as signals, with analysts noting the potential for jamming, spoofing, and harassment of unmanned systems to raise costs without directly targeting personnel.

The risks are escalating. An official from an EU naval mission reported that IRGC radio transmissions warned ships against passage through the Strait of Hormuz. Greece has advised vessels to avoid high-risk routes, citing concerns about electronic interference that could disrupt navigation. Insurers are already adjusting their policies, with reports of war-risk coverage being canceled or significantly increased.

Iran has historically collaborated with allied forces and militias in the region, and some of these groups may escalate attacks on U.S. interests or allied partners in retaliation, further widening the conflict without direct state-to-state engagement. While missile strikes remain a high-impact option, they also increase the likelihood of rapid escalation. Recent analyses suggest that Iran may use missile strikes as a signaling tool, particularly if its leadership feels cornered.

The uncomfortable reality is that neither Washington nor Tehran likely desires a full-scale regional war. In such moments, military strikes rarely occur in isolation; they are often accompanied by diplomatic efforts. Leaders send signals, apply pressure, and attempt to leave room for negotiations. However, escalation can gain momentum quickly. Each missile fired alters the equation, and each casualty raises the stakes, making it increasingly difficult to de-escalate.

Fear and pride play significant roles in these dynamics, as domestic audiences demand displays of strength. This pressure can lead to limited strikes spiraling into larger conflicts. The recent events highlight a broader trend: nation-states are increasingly pairing kinetic strikes with digital offensives. Cyberattacks can blind communications, freeze infrastructure, and disrupt financial systems long before the first explosion is registered.

This reality is crucial for businesses and individuals alike. Modern conflicts do not remain confined to battlefields; supply chains, energy grids, and online platforms can all feel the ripple effects. The blackout in Iran serves as a reminder that digital resilience has become a national security issue. When a country’s internet can drop to just 4 percent of normal traffic within hours, it underscores the rapid escalation potential of cyber conflicts. Even disruptions occurring overseas can have far-reaching consequences for interconnected global networks.

While geopolitics may be beyond individual control, personal digital hygiene can be managed. Practical steps to reduce risk during heightened cyber activity include installing strong antivirus software, keeping devices updated, using unique passwords stored in reputable password managers, enabling two-factor authentication, and being cautious with urgent headlines or alerts about international conflicts.

The reported cyber blackout in Iran may signal a new chapter in modern conflict. While jets and missiles remain significant, the importance of servers, satellites, and code cannot be overlooked. Leaders may attempt to contain damage while demonstrating strength, but history shows how quickly plans can unravel under pressure. Today, warfare operates on electricity and bandwidth as much as it does on fuel and ammunition. When networks go dark, the repercussions extend far beyond the battlefield, affecting banking systems, airports, hospitals, and personal devices.

This moment serves as a crucial reminder: if an entire nation’s digital systems can be disrupted in hours, how prepared is your community for a similar event? The implications of these developments are profound and warrant careful consideration.

Google Discontinues Dark Web Monitoring Service: What You Need to Know

Google has discontinued its Dark Web Report feature, which previously scanned for personal information breaches, leaving users to rely on alternative security tools for monitoring their data exposure.

Google has officially discontinued its Dark Web Report feature, a free service that once scanned known dark web breach dumps for personal information associated with users’ Google accounts. This tool provided notifications when email addresses and other identifiers appeared in leaked datasets.

According to Google’s support page, the dark web scanning ceased on January 15, 2026, with the reporting function removed entirely on February 16, 2026. As a result, users can no longer access this feature. The company stated that this decision reflects a shift toward security tools that offer clearer guidance after exposure, rather than standalone scan alerts.

For those who previously relied on the dark web scan as an early warning system for leaked data, this change removes a significant source of information. The Dark Web Report functioned as a basic exposure scanner, checking whether personal information linked to a Google account had surfaced in known breach collections circulating on the dark web.

When a match was found, users received a notification detailing the type of data that appeared in a leak. This could include an email address, phone number, date of birth, or other identifying details commonly harvested during large-scale hacks. However, the report did not display stolen credentials or provide access to the leaked database itself, nor did it trace the origin of the compromise beyond referencing the breached service when available.

After receiving an alert, users were responsible for taking the next steps. Google recommended actions such as changing passwords, enabling stronger authentication methods, and reviewing account security settings. With the removal of the tool, the automated breach check tied directly to a Google account is no longer available.

Google now directs users to its Security Checkup, a dashboard that scans accounts for weak settings and unusual sign-in activity. Additionally, its built-in Password Manager includes a Password Checkup feature that scans saved credentials against known breach databases and prompts users to change exposed passwords. Google also supports passkeys and two-factor verification to enhance account security.
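The article does not describe Password Checkup's wire protocol, but tools of this kind typically avoid sending raw passwords to the server. As a point of comparison, the sketch below uses Have I Been Pwned's public Pwned Passwords range API, which applies a k-anonymity scheme: only the first five hex characters of the password's SHA-1 hash leave the machine, and the match is made locally against the returned bucket of hash suffixes.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Count how often a password appears in HIBP's breach corpus.

    Only a 5-character SHA-1 prefix is sent to the server; it returns
    every hash suffix in that bucket with its occurrence count, and
    the comparison happens locally (k-anonymity).
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("ascii").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # suffix absent from the bucket: not in the corpus
```

A nonzero count means the password has appeared in known breach dumps and should be changed; a design like this lets a checkup tool consult a breach database without ever disclosing the password itself.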

The Results About You tool allows users to search for personal information in Google Search and submit removal requests for certain publicly indexed details. However, once personal information is compromised, it often ends up far beyond the initial breach. Stolen credentials and identity data are regularly trafficked on underground platforms where buyers can search for information tied to real individuals.

The BidenCash dark web marketplace was taken down by U.S. authorities in June 2025, with the Justice Department confirming that the platform sold stolen personal information and credit card data. These illicit markets operate with a level of organization comparable to legitimate online stores, offering search tools and bulk data sets that can be used to target online accounts. This makes credential stuffing easier, as attackers test leaked passwords across multiple services to gain unauthorized access.

A breach alert tied to a dark web scan indicates a leak at a specific moment in time; it does not track whether that information has been sold to third parties or used in subsequent fraud attempts. For everyday users, this means that simply knowing their data appeared in a leak does not provide much actionable insight.

With Google’s dark web scan now discontinued, some individuals may consider dedicated identity protection services. Many of these services offer continuous monitoring of personally identifiable information and send alerts about changes to credit reports from all three major U.S. credit bureaus. This can include notifications about new inquiries, newly opened accounts, and monthly credit score updates.

Beyond credit monitoring, certain services track linked bank, credit card, and investment accounts for unusual activity. They may also monitor public records for changes to addresses or property titles and alert users if their information appears in those filings. Many providers include identity theft insurance to help cover eligible out-of-pocket recovery costs, with coverage limits varying by plan and provider.

While no service can prevent every form of identity theft, ongoing monitoring and recovery support can facilitate a quicker response if personal information is misused. Google’s decision to drop its Dark Web Report may seem minor, but it eliminates a tool that many users relied on for early warnings about data breaches. Although Google continues to offer Security Checkup, Password Checkup, passkeys, and two-step verification, none of these actively scan dark web breach dumps for users.

Stolen data does not simply vanish; criminals copy, sell, and reuse it. An alert may indicate a single moment of exposure, but ongoing identity theft monitoring is essential for maintaining awareness over time. With the removal of Google’s dark web monitoring feature, users must now decide whether to actively check their data exposure or assume that someone else is monitoring it for them.

For more insights on identity protection and security, visit CyberGuy.com.

Ex-Twitter CEO’s Firm Block Plans to Cut Workforce by Nearly 50% with AI

Jack Dorsey’s company Block plans to lay off 4,000 employees, nearly half of its workforce, citing increased productivity from artificial intelligence tools.

Block, the financial technology company founded by former Twitter CEO Jack Dorsey, has announced plans to lay off 4,000 of its 10,000 employees. This decision is attributed to advancements in artificial intelligence (AI) that have significantly enhanced productivity within the company.

In a letter to shareholders on Thursday, Dorsey emphasized the transformative impact of AI on business operations. “Intelligence tools have changed what it means to build and run a company,” he stated. “We’re already seeing it internally. A significantly smaller team, using the tools we’re building, can do more and do it better. And intelligence tool capabilities are compounding faster every week.”

Despite the substantial layoffs, Dorsey assured stakeholders that the decision was not a reflection of financial instability. He pointed out that Block had performed well, exceeding Wall Street expectations with a reported total revenue of $6.25 billion for the fourth quarter. In a post on X, he explained that he faced two options: to gradually reduce the workforce over an extended period or to act decisively in the present.

“Repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead,” Dorsey wrote.

During the earnings call, executives noted that Block had been increasingly integrating AI into its operations for several years. They indicated that some AI initiatives were nearing full implementation, while others were still in earlier stages of development. This announcement follows a previous round of layoffs earlier in February, which had already seen hundreds of workers let go.

The decision to reduce the workforce by nearly half has drawn comparisons to the drastic measures taken by Elon Musk when he acquired Twitter (now X) in November 2022, where he cut approximately 50% of the staff in a single move. Dorsey, a co-founder of Twitter, has had a complex relationship with Musk, initially supporting his acquisition but later suggesting that Musk “should have walked away.”

In addition to his role at Block, Dorsey has been involved in the development of Bluesky, a decentralized alternative to Twitter, and has expressed strong support for Bitcoin.

The layoffs at Block have reignited discussions about the broader implications of AI on employment. Tech leaders, including Anthropic CEO Dario Amodei and Meta CEO Mark Zuckerberg, have raised concerns about the potential negative effects of AI on the workforce. A recent report from the research firm Citrini, released on February 22, outlined a scenario where the growth of AI could adversely affect the overall economy.

Conversely, some industry figures have cautioned against hastily attributing layoffs to AI. OpenAI CEO Sam Altman has pointed out that some companies may be “AI washing,” or misleadingly linking unrelated layoffs to advancements in AI technology.

Critics on X have challenged Dorsey’s narrative regarding the layoffs at Block. One user highlighted that the company’s workforce had more than tripled from 3,900 to 12,500 employees between December 2019 and December 2022, during the tech boom fueled by the pandemic. “Unwinding less than half an insane COVID overhiring binge has much more to do with Jack Dorsey’s managerial incompetence than whether AI is going to take your job,” the post read.

Another commenter suggested that Block had created “two parallel company structures during COVID” and was now consolidating them, framing the layoffs as a management correction rather than a revolutionary shift driven by AI. This user predicted that more companies might use “AI restructuring” as a pretext for decisions that were already in the works.

The developments at Block reflect ongoing tensions in the tech industry regarding the role of AI in shaping the future of work and the management strategies employed by companies navigating these changes. As the conversation continues, the implications for employees and the economy remain a focal point of concern.

According to The American Bazaar, the situation at Block serves as a critical case study in the evolving landscape of technology and employment.

Amazon Discontinues Development of Blue Jay Warehouse Robot

Amazon has discontinued its Blue Jay warehouse robot program, raising questions about the scalability of advanced robotics in logistics.

Amazon has quietly ended its Blue Jay warehouse robot program, which aimed to enhance same-day delivery capabilities, just months after its initial unveiling. The multi-armed, ceiling-mounted robot was introduced in October as a significant advancement in warehouse automation.

Despite the initial excitement surrounding Blue Jay, the program faced considerable challenges that ultimately led to its discontinuation. While the core technology behind Blue Jay will be integrated into other projects, the robot itself will no longer be developed.

This abrupt decision prompts a critical inquiry: If Amazon, one of the world’s leading logistics companies, cannot successfully implement a high-profile robot at scale, what implications does this have for the future of artificial intelligence (AI) in practical applications?

Blue Jay was not merely an upgrade to existing conveyor belt systems; it was designed to recognize and sort multiple packages simultaneously using advanced AI-powered perception models. Amazon claimed that the system was developed in under a year, a remarkable feat aimed at increasing package throughput while alleviating worker strain in fulfillment centers.

However, despite its promising design, Blue Jay encountered significant engineering and cost hurdles. The robot’s ceiling-mounted configuration required intricate installation and seamless integration into Amazon’s Local Vending Machine warehouses, which are designed as expansive, automated structures. This rigidity in design likely became a liability, as modifications would necessitate extensive reconfiguration of hardware and infrastructure, a process that is both time-consuming and costly.

As a result, several employees who were involved in the Blue Jay project have transitioned to other robotics initiatives within the company. Although the Blue Jay robot itself has been shelved, Amazon continues to explore new avenues for improving its warehouse systems, with the underlying technology informing future designs.

Looking ahead, Amazon is shifting its focus to a new warehouse architecture known as Orbital. Unlike the older Local Vending Machine model, Orbital is modular, allowing for quicker deployment in various layouts. This adaptability is crucial as retail landscapes evolve, with customers increasingly expecting same-day delivery from urban centers, local stores, and grocery outlets.

Orbital could enable Amazon to establish micro-fulfillment centers in proximity to retail locations, including Whole Foods, thereby enhancing its competitive edge against rivals like Walmart, which already boasts a robust grocery network.

In conjunction with Orbital, Amazon is also developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling-mounted design, Flex Cell will operate on the floor, indicating a strategic shift towards smaller, more flexible automation solutions tailored to the unpredictable nature of local retail environments.

For regular Amazon customers, the immediate impact of these changes may be minimal, as same-day and next-day delivery options remain a priority. However, the long-term implications of Amazon’s evolving robotics strategy could significantly influence order fulfillment speed, pricing, and the operational dynamics of local warehouses.

If Orbital proves successful, it could facilitate faster and more efficient deliveries. Conversely, if it encounters difficulties, the expansion of same-day delivery services could slow down or become more costly. This scenario underscores a broader truth about AI: while software can adapt rapidly through code updates, physical robots face challenges that require substantial investment and time to overcome.

The discontinuation of Blue Jay highlights a growing divide in the tech industry. While software-based AI is advancing at a remarkable pace, hardware development remains fraught with complexities. Robots must navigate real-world challenges such as gravity, friction, and unpredictable human interactions, where each error carries tangible costs.

Amazon’s decision to shelve Blue Jay does not signify a retreat from robotics; rather, it represents a recalibration of its approach. The company is betting on the success of modular, flexible systems over large, integrated machines. This strategic pivot could shape the future of e-commerce logistics.

Ultimately, the promise of faster delivery, improved availability, and enhanced local convenience remains intact for consumers. However, the journey to realize these ambitions involves navigating the intricate balance between AI aspirations and the constraints of physical reality.

As Amazon grapples with the challenges of implementing advanced robotics at scale, it raises an important question: How much of the AI revolution is still more vision than reality? This ongoing dialogue will shape the future of technology and logistics in the years to come, according to CyberGuy.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a forehead-worn electronic tattoo, or “e-tattoo,” designed to monitor mental workload in high-stress professions by measuring brain waves and eye movements.

In a study published in the journal Device, scientists introduced an electronic tattoo, or “e-tattoo,” that can help individuals in high-pressure work environments monitor their brain activity and cognitive performance.

The research team, led by Dr. Nanshu Lu from the University of Texas at Austin, emphasizes that mental workload is a crucial element in human-in-the-loop systems, significantly affecting cognitive performance and decision-making processes. This device aims to provide a more cost-effective and user-friendly method for tracking mental workload, particularly in demanding fields such as aviation, healthcare, and emergency response.

Dr. Lu noted that the e-tattoo could be particularly beneficial for professionals like pilots, air traffic controllers, doctors, and emergency dispatchers, who often operate under intense stress. Additionally, the technology could enhance training and performance for emergency room doctors and operators of robots and drones.

The primary objective of the study was to develop a means of measuring cognitive fatigue among individuals in high-stakes careers. The e-tattoo is designed to be temporarily affixed to the forehead and is significantly smaller than existing monitoring devices.

Utilizing electroencephalogram (EEG) and electrooculogram (EOG) technologies, the e-tattoo measures both brain waves and eye movements. Traditional EEG and EOG equipment tends to be bulky and expensive, but the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained that the device is designed to be as thin and flexible as a temporary tattoo sticker, allowing for comfortable wear while providing accurate readings. She stated, “Human mental workload is a crucial factor in the fields of human-machine interaction and ergonomics due to its direct impact on human cognitive performance.”

The study involved six participants who were tasked with identifying letters displayed on a screen. Each letter appeared sequentially at various locations, and participants were instructed to click a mouse when either the letter or its position matched one shown earlier in the sequence. The difficulty of the tasks increased progressively, and the researchers observed shifts in brainwave activity indicating a heightened mental workload as the challenges intensified.
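Workload-related shifts in brainwave activity are often summarized as power in canonical EEG frequency bands; a rising theta-to-alpha power ratio is one common proxy for mental workload in the EEG literature (the study's own analysis may differ). As a minimal, dependency-free sketch, the Goertzel algorithm estimates power at a single frequency of a sampled signal:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Estimate signal power at one frequency via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the selected DFT bin, normalized by window length
    return (s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2) / n

# Synthetic 2-second "EEG" trace: strong 6 Hz (theta) plus weaker 10 Hz (alpha)
fs = 256
t = [i / fs for i in range(2 * fs)]
signal = [2.0 * math.sin(2 * math.pi * 6 * x) + 1.0 * math.sin(2 * math.pi * 10 * x)
          for x in t]

theta = goertzel_power(signal, fs, 6)
alpha = goertzel_power(signal, fs, 10)
print(theta / alpha)  # a ratio above 1 indicates theta dominance
```

On this synthetic trace the theta component has twice the amplitude of the alpha component, so its estimated band power is roughly four times larger; real EEG is far noisier and would use windowed averages rather than a single bin.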

The e-tattoo consists of a battery pack, reusable chips, and a disposable sensor, making it a practical solution for real-time monitoring. Currently, the device is a lab prototype, with a production cost of approximately $200.

Dr. Lu highlighted that further development is necessary before the e-tattoo can be commercialized. This includes enhancing the device’s ability to decode mental workload in real-time and validating its effectiveness with a larger group of participants in more realistic settings.

As the demand for effective stress management tools in high-pressure jobs continues to grow, the e-tattoo represents a promising advancement in cognitive performance monitoring, potentially transforming how professionals manage their mental workload.

According to Fox News, the e-tattoo could pave the way for improved performance and training in various high-stakes occupations.

Sheel Dodani Receives $100,000 Hackerman Award for Protein Research

Indian American scientist Sheel Dodani has been awarded the prestigious $100,000 Hackerman Award for her innovative research in protein technology aimed at enhancing human health and environmental sustainability.

Sheel Dodani, an Indian American scientist, has received the esteemed 2026 Norman Hackerman Award in Chemical Research from The Welch Foundation. This award, which includes a $100,000 prize and a bronze sculpture, recognizes her groundbreaking work in the field of engineered proteins, specifically their application as anion sensors in biological systems.

Dr. Dodani is an associate professor of chemistry and biochemistry at the University of Texas at Dallas. Her research has been described as “using creative and daring chemistry to engineer technologies” that significantly contribute to human health and environmental improvement. Fred Brazelton, chair and director of The Welch Foundation, praised her achievements, stating, “Dr. Dodani is using creative and daring chemistry to engineer technologies that can measure and manipulate anions in living systems for the betterment of human health and the environment.”

The Hackerman Award is named after the foundation’s former scientific advisory board chair and aims to honor the accomplishments of early-career chemical scientists in Texas who are committed to advancing the fundamental understanding of chemistry. The award not only highlights individual achievement but also underscores the importance of innovative research in the scientific community.

Dr. David Hyndman, dean of the School of Natural Sciences and Mathematics at UT Dallas, remarked on the significance of Dodani’s work, stating, “Sheel Dodani’s research is opening an important new window into the chemistry of life.”

Dodani’s research group has developed the first coherent suite of genetically engineered fluorescent proteins that serve as biosensors for inorganic anions. While much attention has been given to cations—positively charged ions that are crucial for biological processes—anions, or negatively charged ions, have not been as thoroughly explored. This gap in understanding is particularly notable given the vital role that anions play in various biological functions.

One prominent example of an anion is chloride, which is essential for regulating fluid balance, blood pressure, and pH levels in the human body. The biosensors developed by Dodani have revolutionized researchers’ ability to track and visualize the behavior and interactions of these biologically significant anions in real time within living systems.

By utilizing fluorescent biosensors, researchers can now observe how anions behave in cells, paving the way for new therapeutic avenues. This includes the potential identification of small molecules that could treat chloride channel dysfunctions associated with diseases such as cystic fibrosis.

Reflecting on her research journey, Dodani noted, “This work began with a fundamental question: How can we bind an anion in water?” She explained that her team turned to nature’s supramolecular machines—proteins—to find answers. Through protein engineering, they have unlocked new functionalities in fluorescent proteins that enable the observation of anion biology, which has traditionally been challenging to study directly in living cells.

Dodani expressed gratitude for the support from The Welch Foundation, stating, “The Welch Foundation gave us the opportunity to pursue this direction early on. At the time, there was no established framework for investigating anions in water, let alone in living systems. By integrating concepts from different disciplines, we have started to answer questions that were previously out of reach.”

The Welch Foundation plays a crucial role in providing resources that allow researchers like Dodani to take risks in their scientific inquiries. This support is vital for those who aim to tackle complex questions that could have significant implications for human health and the environment.

Born and raised in Plano, Texas, Dodani completed her Bachelor of Science in chemistry at UT Dallas. She then pursued her PhD at the University of California, Berkeley, followed by a postdoctoral fellowship at the California Institute of Technology. In 2016, she returned to UT Dallas as a faculty member in the School of Natural Sciences and Mathematics, where she continues to make impactful contributions to the field of chemistry.

According to The American Bazaar, Dodani’s innovative research not only enhances our understanding of anions but also holds promise for future advancements in medical and environmental applications.

Arvind KC Appointed to Lead Global Expansion Efforts at OpenAI

OpenAI has appointed Arvind KC, a former Google executive, as Chief People Officer to enhance talent acquisition and workplace culture amid the company’s rapid expansion.

OpenAI has announced the appointment of Arvind KC as its new Chief People Officer, marking a significant addition to the leadership team of one of the world’s most scrutinized artificial intelligence companies.

KC, who previously held executive roles at Google and Roblox, will oversee human resources and internal scaling efforts at OpenAI during a period of rapid growth in both headcount and global influence.

With a strong foundation in both technical and managerial disciplines, KC brings a unique perspective to the role. He earned a bachelor’s degree in chemical engineering from the University Institute of Chemical Technology (UICT) in Mumbai, India, a prestigious institution known for its rigorous engineering programs.

Following his education in India, KC moved to the United States to pursue an MBA with a focus on operations management from Santa Clara University. This combination of technical knowledge and strategic management has positioned him well for leadership roles in high-growth technology environments.

Throughout his career, KC has navigated the complexities of rapidly scaling organizations. Most recently, he served as Chief People and Systems Officer at Roblox, where he aligned workforce strategy with internal technical systems to support the company’s growth.

Before his tenure at Roblox, KC was a Vice President at Google, where he led global engineering teams. His experience in engineering-heavy roles at companies like Palantir and Facebook (now Meta) allows him to effectively communicate with the researchers and developers he will now manage.

In his new position at OpenAI, KC is tasked with humanizing the company’s rapid expansion, which is often viewed through the lens of its algorithms. His responsibilities will include overseeing global talent acquisition, employee development, and fostering a workplace culture that can withstand the scrutiny faced by the AI sector.

“Arvind’s experience leading global teams at some of the world’s most innovative companies will be invaluable as we continue to grow,” OpenAI stated, highlighting his proven track record in managing large-scale organizational transitions.

This appointment signals a maturation phase for the San Francisco-based firm as it transitions from a small research lab to a global commercial powerhouse. The emphasis on the “human” element of operations reflects a strategic priority for OpenAI as it seeks to attract and retain top talent in a competitive labor market.

KC is expected to bridge the gap between ambitious technical objectives and the everyday needs of a world-class workforce, ensuring that OpenAI remains an attractive destination for elite professionals.

According to The American Bazaar, this leadership change underscores OpenAI’s commitment to developing a robust organizational culture as it continues to expand its reach in the AI industry.

Apple Warns Users of Scam Emails Targeting App Passwords

A recent phishing scam impersonating Apple claims a fraudulent $2,990 PayPal charge and urges recipients to call a fake support number, prompting warnings from cybersecurity experts.

A new phishing scam targeting Apple users has emerged, featuring a deceptive email claiming that an app-specific password was generated for the recipient’s account. The email falsely states that the user authorized a $2,990.02 charge through PayPal and includes a confirmation number, urging the recipient to call a support number immediately. However, this message is a classic example of a phishing scam.

The email is designed to instill panic and urgency in recipients. It appears to be professionally crafted, using Apple branding and mentioning Apple Support. However, upon closer inspection, several red flags indicate that the message is not legitimate.

One of the most significant warning signs is the “To” field, which displays an email address that does not match the recipient’s actual Apple ID. Legitimate emails from Apple are sent directly to the email address associated with the user’s Apple ID. If the visible recipient address differs from yours, it is likely a mass-mailed or spoofed message, a common tactic used by scammers.

Scammers often use large sums of money, like the nearly $3,000 charge mentioned in this email, to provoke fear and prompt quick action from recipients. The goal is to create a sense of urgency that leads individuals to act without thinking critically about the situation.

The email also instructs recipients to call a specific phone number, which does not belong to Apple. Authentic Apple security communications typically direct users to log into their accounts directly rather than pressuring them to call an unfamiliar support line. If a recipient calls this number, they may be connected to a scammer who could extract personal information or financial details.

Additionally, the email contains links that appear to lead to official Apple resources, such as “Apple Account” and “Apple Support.” However, these links may be disguised, leading to malicious websites instead. It is crucial to avoid clicking on links in suspicious emails and instead navigate to official websites by typing the URL directly into a browser.

Another red flag is the mismatch between the email’s subject and its content. While the subject mentions an app-specific password, the body of the email suddenly shifts to discussing a PayPal transaction. This inconsistency is a common tactic used by scammers to heighten urgency and confusion.

The email begins with a generic greeting, “Dear Customer,” rather than addressing the recipient by name. This impersonal approach is typical of bulk phishing emails, which often lack the personalization found in legitimate communications from trusted companies.

Moreover, the email’s Reply-To field may show an address that appears to be from Apple, such as appleid-usen@email.apple.com. However, scammers can easily spoof sender information, making it look like the message is coming from a trusted source. Users should be cautious and evaluate all red flags collectively rather than relying solely on the sender’s address.

The language used in the email is also a telltale sign of a scam. Phrases like “You authorized a USD 2,990.02 payment to apple.com using PayPal” sound awkward and unnatural. Genuine Apple receipts typically reference specific products or subscriptions rather than vague payment notifications tied to password alerts.

Furthermore, the email may display a masked address or an unusual domain, such as relay.quickinvoicesus.com, which does not conform to standard Apple formatting. Legitimate Apple communications will reference the user’s Apple ID directly, not an unrelated invoice-style domain.
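Several of these sender-side red flags can be checked mechanically. The sketch below, using Python's standard `email` module, flags any From, Reply-To, or Return-Path domain outside an illustrative allow-list of Apple domains; the allow-list and sample message are assumptions for demonstration, not Apple's actual sending infrastructure:

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative allow-list only; not an authoritative list of Apple domains.
TRUSTED_APPLE_DOMAINS = {"apple.com", "email.apple.com"}

def flag_suspicious_sender(raw_email: str) -> list:
    """Return red flags found in an email's sender-related headers."""
    msg = message_from_string(raw_email)
    flags = []
    for header in ("From", "Reply-To", "Return-Path"):
        value = msg.get(header)
        if not value:
            continue
        _, addr = parseaddr(value)           # strips the display name
        domain = addr.rpartition("@")[2].lower()
        # A display name can claim "Apple" while the real domain is unrelated.
        if domain and not any(domain == d or domain.endswith("." + d)
                              for d in TRUSTED_APPLE_DOMAINS):
            flags.append(f"{header} domain '{domain}' is not a trusted Apple domain")
    return flags

sample = (
    "From: Apple Support <no-reply@relay.quickinvoicesus.com>\n"
    "Reply-To: appleid-usen@email.apple.com\n"
    "Subject: App-specific password generated\n"
    "\n"
    "body"
)
print(flag_suspicious_sender(sample))
```

Note that a matching domain proves nothing on its own, since headers can be spoofed; this only automates the mismatch check described above, and the other red flags still need human judgment.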

Scammers often create a sense of urgency by urging recipients to call immediately to report an unauthorized transaction. This tactic is a hallmark of phishing schemes, as legitimate companies encourage users to log in securely to their accounts rather than rushing them into calling a third-party number.

Once on the phone with a scammer, victims may be led to provide sensitive information or even financial details, resulting in losses that far exceed the fake $2,990 charge mentioned in the email.

If you receive an email of this nature, it is essential to take a moment to pause and assess the situation. Instead of clicking on links or calling numbers provided in the email, verify the details by visiting the official Apple and PayPal websites directly. If you did not generate an app-specific password and see no suspicious charges, you are likely safe.

To protect yourself from phishing scams, consider implementing a few smart habits. Enable two-factor authentication (2FA) on your Apple ID, PayPal, and email accounts. This additional layer of security can prevent unauthorized access even if someone guesses your password.

Always be cautious when an email urges you to call support or click on links. Instead, navigate directly to official websites by typing the addresses into your browser. Ensure that you have strong antivirus software installed on your devices, as it can help detect malicious links and block phishing sites.

Regularly update your software to fix vulnerabilities that attackers may exploit. Outdated software can make it easier for phishing and malware attacks to succeed. Additionally, avoid reusing passwords across different accounts, as this practice can put your entire digital life at risk if one account is compromised.

If you suspect that your email has been exposed in a data breach, consider using a password manager that includes a breach scanner to check for compromised credentials. Reducing the amount of personal information available online can also help decrease your risk of falling victim to phishing scams.

Lastly, report any suspicious emails to Apple at reportphishing@apple.com and mark them as phishing through your email provider. This action helps improve filters and protects others from becoming victims.

In the face of increasingly sophisticated phishing scams, it is vital to remain vigilant and informed. If you receive an email claiming to be from Apple regarding an app-specific password and a large PayPal charge, trust your instincts—it’s likely a scam. Always verify through official channels to protect your personal and financial information.

According to a PayPal spokesperson, “PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams. We always encourage consumers to practice vigilance online and to learn how to spot the warning signs of common fraud.”

Astronauts Return to Earth After ISS Mission Rescues Stranded Crew

A NASA crew successfully splashed down in the Pacific Ocean after completing a mission to the International Space Station, marking the agency’s first Pacific landing in 50 years.

NASA astronauts Anne McClain and Nichole Ayers, along with international crew members Takuya Onishi from Japan and Kirill Peskov from Russia, returned to Earth on Saturday, splashing down in the Pacific Ocean off the coast of Southern California. The landing occurred at 11:33 a.m. ET in a SpaceX capsule, marking a significant milestone as it was NASA’s first Pacific splashdown in five decades.

The crew’s mission involved relieving two astronauts, Suni Williams and Butch Wilmore, who had been stranded aboard the International Space Station (ISS) for nine months. Their extended stay was due to issues with the Boeing Starliner capsule, which had experienced thruster problems and helium leaks. NASA ultimately deemed it too risky to return Williams and Wilmore in the Starliner, which flew back to Earth without a crew. Instead, the two astronauts returned home in a SpaceX capsule after their replacements arrived.

Wilmore announced his retirement from NASA earlier this week after a distinguished 25-year career. Reflecting on their mission, McClain expressed hopes that it would serve as a reminder of the power of collaboration and exploration, especially during challenging times on Earth. She shared her anticipation of enjoying some downtime upon her return, while her crewmates looked forward to indulging in hot showers and burgers.

This mission also marked a change for SpaceX, which opted to switch its splashdown locations from Florida to California to minimize the risk of debris falling on populated areas. After exiting the spacecraft, the crew underwent medical checks before being transported by helicopter to meet a NASA aircraft bound for Houston.

Steve Stich, manager of NASA’s Commercial Crew Program, expressed satisfaction with the mission’s outcome during a post-splashdown press conference. “Overall, the mission went great, glad to have the crew back,” he stated. “SpaceX did a great job of recovering the crew again on the West Coast.”

Dina Contella, deputy manager for NASA’s International Space Station program, echoed this sentiment, noting her happiness at seeing the Crew 10 team back on Earth. She remarked that the crew had orbited the Earth 2,368 times and traveled more than 63 million miles during their 146 days in space.

This successful mission underscores the ongoing collaboration between NASA and commercial partners like SpaceX, as they work together to advance human space exploration.

According to Fox News, the mission’s success highlights the resilience and adaptability of space travel in the modern era.

11 Indian-American Innovators Recognized in Forbes’ 250 Greatest Innovators

Forbes has recognized 11 Indian Americans in its “250 America’s Greatest Innovators” list, highlighting their significant contributions to technology and medicine as the nation celebrates its 250th anniversary.

Forbes recently unveiled its “250 America’s Greatest Innovators” list to commemorate the United States’ 250th anniversary, showcasing a diverse group of visionary founders and executives who are reshaping global technology and medicine. Among the honorees are 11 Indian Americans, whose groundbreaking work spans from the early days of the internet to the cutting-edge developments in generative AI.

Leading this distinguished group is Vinod Khosla, co-founder of Sun Microsystems and a prominent venture capitalist, who secured the No. 10 spot. Khosla is renowned for his “black swan” investing style, with early investments in OpenAI and green technology solidifying his reputation as a leading risk-taker in the industry.

Close behind Khosla are tech giants Satya Nadella and Sundar Pichai, who have been instrumental in “re-founding” Microsoft and Alphabet, respectively. Their leadership has pivoted these legacy companies toward an AI-first future, reflecting the transformative power of innovation in the tech landscape.

The Forbes list emphasizes that innovation is often a marathon rather than a sprint. Suma Krishnan, who ranks No. 127, has made significant strides in treating “butterfly skin” disease. She co-founded Krystal Biotech in her 50s to develop the first topical gene therapy, marking a pivotal moment in medical innovation.

Similarly, Jay Chaudhry, ranked No. 128, has been recognized for his pioneering work in “zero trust” cloud security at Zscaler, which has disrupted the traditional firewall industry and redefined security protocols in the digital age.

The Indian American diaspora continues to make substantial contributions to technical infrastructure. Neha Narkhede, co-founder of Confluent and now CEO of Oscilar, is celebrated at No. 155 for her work in real-time data streaming. At MIT, Sangeeta Bhatia, ranked No. 161, has been honored for her innovative approach to merging microchips with biology, revolutionizing drug testing methodologies.

The diversity of this group extends into the daily lives of millions. Aman Narang, who ranks No. 177, has transformed the restaurant industry with Toast’s management platform. Baiju Bhatt, at No. 183, has democratized retail investing through Robinhood and is now pivoting to space-based solar power with Aetherflux. Naval Ravikant, ranked No. 230, has broadened access to startup funding via AngelList, further contributing to the entrepreneurial ecosystem.

The final names on the list reflect a commitment to human equity and efficiency. Shan Sinha, ranked No. 202, has made significant contributions to data management and healthcare safety. Shiv Rao, at No. 235, has been recognized for his AI medical scribe, Abridge, which automates clinical documentation to alleviate physician burnout, while Shivani Siroya, ranked No. 238, has been lauded for her work with Tala, which utilizes mobile data to provide credit to the “unbanked” in emerging markets.

This impressive collection of 11 innovators underscores a robust pipeline of talent that has become essential to the American economy. Whether they began their journeys in a garage or now lead major conglomerates, these individuals have successfully transformed complex scientific and digital theories into everyday realities.

According to Forbes, the achievements of these innovators highlight the critical role that diverse perspectives play in driving progress and shaping the future.

Four Indian-American Researchers Selected as 2026 Sloan Research Fellows

Four Indian American researchers have been awarded the 2026 Sloan Research Fellowships, recognizing their contributions to science and innovation in their respective fields.

Four Indian American researchers have been named among the 126 recipients of the prestigious 2026 Sloan Research Fellowships. Aayush Jain, Arun Kumar Kuchibhotla, and Aditi Raghunathan from Carnegie Mellon University, along with Anand Natarajan from the Massachusetts Institute of Technology (MIT), have been honored for their exceptional research accomplishments.

The Sloan Research Fellowships, awarded annually by the Alfred P. Sloan Foundation, celebrate early-career researchers who demonstrate creativity and innovation in their fields. Each fellowship includes a two-year grant of $75,000, which can be utilized flexibly to support the fellow’s research initiatives.

Stacie Bloom, president and CEO of the Alfred P. Sloan Foundation, remarked, “The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines. We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all.”

Aayush Jain serves as an assistant professor in the Computer Science Department at Carnegie Mellon University. His research focuses on theoretical and applied cryptography, particularly the mathematical foundations that ensure the security of modern cryptographic systems. Jain aims to identify new sources of computational hardness and strengthen the long-term security of encrypted computation, addressing critical gaps in post-quantum cryptography. Additionally, he is dedicated to training graduate students in foundational cryptographic theory.

Arun Kumar Kuchibhotla, an associate professor in the Department of Statistics and Data Science at Carnegie Mellon, tackles foundational challenges in statistical inference and predictive learning. His work has significant applications in machine learning and artificial intelligence, where he develops robust, “assumption-lean” frameworks for uncertainty quantification. Kuchibhotla’s research also contributes to financial time series forecasting and significance testing in causal inference. He has pioneered “honest inference” procedures, such as the Hull-based Confidence Method (HulC), which maintain validity in high-dimensional and irregular settings where traditional methods often falter.

Aditi Raghunathan, also an assistant professor in the Computer Science Department at Carnegie Mellon, focuses on understanding the vulnerabilities of AI systems and developing models that are safe, accurate, and reliable in real-world applications. She leads the AI Reliability Lab, which is dedicated to creating trustworthy AI through rigorous analysis and principled methodologies. Raghunathan’s research has garnered recognition at prestigious conferences and plays a crucial role in promoting responsible AI system design and deployment.

Anand Natarajan, an associate professor in Electrical Engineering and Computer Science at MIT, is a principal investigator at the Computer Science and Artificial Intelligence Lab and the MIT-IBM Watson AI Lab. His research primarily revolves around quantum complexity theory, exploring the power of interactive proofs and arguments within a quantum framework. Natarajan’s work aims to evaluate the complexity of computational problems in quantum settings, assessing both the capabilities and the reliability of quantum computers. He holds a PhD in physics from MIT, along with an MS in computer science and a BS in physics from Stanford University. Before joining MIT in 2020, he was a postdoctoral researcher at the Institute for Quantum Information and Matter at Caltech.

The recognition of these four researchers underscores the significant contributions of Indian Americans in advancing scientific knowledge and innovation. Their work not only enhances their respective fields but also sets a foundation for future breakthroughs in technology and research.

According to The American Bazaar, the Sloan Research Fellowships continue to highlight the importance of supporting early-career scientists who are poised to make substantial impacts in their disciplines.

Indian-American Billionaire Vinod Khosla Criticizes Ro Khanna, Bernie Sanders on AI

Indian American billionaire Vinod Khosla criticized U.S. lawmakers Ro Khanna and Bernie Sanders for their warnings about artificial intelligence in a recent post on social media platform X.

Indian American billionaire Vinod Khosla has publicly expressed his discontent with U.S. lawmakers Ro Khanna and Bernie Sanders. In a recent post on X, Khosla launched a scathing critique of their warnings regarding the potential negative consequences of artificial intelligence (AI).

In his post, Khosla stated, “Bernie Sanders, Ro Khanna warn of AI’s potential negative consequences. Morons like Ro Khanna and Bernie Sanders will stop all the good AI can do to protect their religion. Good intentions but bad outcomes is ok for these socialists/commie.”

Vinod Khosla is a well-known Indian-American entrepreneur, venture capitalist, and technology investor. Born in 1955 in India, Khosla began his academic journey as an electrical engineer at the Indian Institute of Technology (IIT) Delhi, later earning a Master’s degree in Biomedical Engineering from Carnegie Mellon University. His career took off at Sun Microsystems, where he was part of the founding team that contributed to the company’s early success.

Khosla gained significant recognition as a longtime general partner at Kleiner Perkins Caufield & Byers, one of Silicon Valley’s most influential venture capital firms, focusing primarily on technology investments. In 2004, he established Khosla Ventures, which invests in clean technology, biotechnology, and disruptive startups. Known for his bold investment strategies and advocacy for technological innovation, Khosla has played a pivotal role in shaping the investment landscape of Silicon Valley, often taking high-risk bets that challenge conventional approaches.

The recent exchange between Khosla and the lawmakers followed a town hall meeting at Stanford University on February 20, 2026. During this event, Sanders articulated concerns that artificial intelligence is advancing at a pace that existing economic and political systems cannot adequately manage. He further questioned Silicon Valley’s assertions that AI will inherently deliver broad public benefits, recalling similar claims made during previous technological advancements that ultimately resulted in increased wealth and power concentration.

This clash between Khosla and U.S. lawmakers underscores a broader tension at the intersection of technology, policy, and societal oversight. It reflects the ongoing debate about how rapidly emerging technologies, particularly artificial intelligence, should be guided, regulated, and integrated into public life. Advocates like Khosla emphasize the transformative potential of AI in addressing complex global challenges, from healthcare innovations to energy efficiency. They argue that excessive regulation could stifle progress and limit the benefits that AI could provide.

On the other hand, critics such as Sanders and Khanna highlight the necessity for caution, stressing that technological advancements often outpace the social, economic, and ethical frameworks required for responsible management. Their concerns are rooted in historical patterns where technological optimism has sometimes led to concentrated wealth and power, along with unforeseen societal consequences.

The ongoing dialogue between Khosla and lawmakers illustrates the complexities surrounding the development and implementation of artificial intelligence, a technology that promises significant advancements but also raises critical ethical and regulatory questions.

According to The American Bazaar, this exchange is part of a larger conversation about the future of AI and its impact on society.

Spyware Can Take Control of Your Phone in Seconds

ZeroDayRAT spyware poses a significant threat to mobile users, enabling attackers to access personal data, including messages, location, and live camera feeds on both iPhone and Android devices.

In an age where digital security is paramount, the emergence of ZeroDayRAT spyware has raised alarms among mobile users. This sophisticated malware can compromise both iPhone and Android devices, granting attackers access to a wide range of personal information, including messages, notifications, location data, and even live camera feeds.

Unlike traditional malware that typically targets specific data, ZeroDayRAT functions as a comprehensive mobile compromise toolkit. Security researchers from iVerify, a mobile security and digital forensics company, have described it as a significant threat due to its extensive capabilities.

Once installed, ZeroDayRAT begins transmitting data back to a central dashboard controlled by the attacker. This dashboard allows cybercriminals to build detailed profiles of victims, tracking their daily activities, communication patterns, and app usage. Reports indicate that the dashboard even includes a live activity timeline, offering chilling insights into a user’s life.

What sets ZeroDayRAT apart from other malware is its advanced surveillance features. The spyware includes keylogging and live surveillance tools, enabling attackers to monitor users as they log into sensitive accounts or engage in private conversations. This level of intrusion is not merely hypothetical; it is a built-in capability of the spyware.

In addition to spying on personal communications, ZeroDayRAT targets financial applications directly. It reportedly includes tools designed to compromise digital payment systems such as Apple Pay and PayPal. The spyware can intercept banking notifications and utilize clipboard injection techniques to redirect cryptocurrency transactions to the attacker’s wallet. This means that even without full control of the device, the spyware can facilitate significant financial theft.

Alarmingly, ZeroDayRAT is openly marketed on platforms like Telegram, making it accessible to individuals without advanced hacking skills. This combination of power and accessibility heightens the threat it poses to mobile users.

Both Apple and Google have long warned against installing applications from outside their official app stores, as sideloading can weaken security measures. When users bypass these trusted platforms, they increase their risk of encountering spyware like ZeroDayRAT. Although no system is infallible, sticking to recognized app marketplaces can significantly reduce the chances of infection.

Advanced spyware is designed to remain hidden, often without triggering obvious warnings. However, there are subtle signs that may indicate an infection. Users should be vigilant for rapid battery drain, unexpected device heat, and unusual spikes in mobile data usage. Additionally, checking for unfamiliar apps or configuration profiles can help identify potential threats.

If users suspect their device may be compromised, it is crucial to act quickly. The first step is to disconnect from Wi-Fi and cellular data to prevent further data transmission to the attacker. Changing passwords should be done from a secure device, and enabling two-factor authentication (2FA) on all accounts is highly recommended.

Installing robust antivirus software on mobile devices can also help detect and remove malicious applications. Users should regularly review app permissions and remove any that seem unnecessary or suspicious. For iPhone users, checking for unknown configuration profiles in the settings is essential, while Android users should scrutinize installed apps and device administrator permissions.

In cases where a device is severely compromised, a factory reset may be necessary to eliminate the spyware. This process wipes the device clean, removing hidden malware components. However, users should back up only essential files and avoid restoring full system backups that could reintroduce malicious software.

Given that ZeroDayRAT specifically targets banking and cryptocurrency applications, users should closely monitor their financial accounts for any unusual transactions. If suspicious activity is detected, it is imperative to contact the bank immediately.

While the threat of spyware like ZeroDayRAT is unsettling, users can take proactive steps to safeguard their digital security. Only installing apps from trusted sources, avoiding links from unknown senders, and regularly updating operating systems can help mitigate risks. Additionally, utilizing reputable password managers and enabling 2FA can provide an extra layer of protection.

Ultimately, the responsibility for digital safety lies with users. By remaining cautious and informed, individuals can significantly reduce their risk of falling victim to spyware attacks. The question remains: Are tech companies and app stores doing enough to protect users from such sophisticated threats? This ongoing concern highlights the need for continued vigilance in the face of evolving cyber threats.

For more information on mobile security and to stay updated on the latest threats, visit CyberGuy.com.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

Harvard physicist Dr. Avi Loeb suggests that the interstellar object 3I/ATLAS may be more than a comet, potentially serving as an alien probe on a reconnaissance mission.

A massive interstellar object, known as 3I/ATLAS, has recently captured the attention of astronomers and scientists alike due to its unusual characteristics. Harvard physicist Dr. Avi Loeb has raised the possibility that this object could be more than just a typical comet, suggesting it may be on a reconnaissance mission.

Dr. Loeb, a science professor at Harvard University, expressed his concerns in an interview with Fox News Digital. “Maybe the trajectory was designed,” he said. “If it had an objective to sort of be on a reconnaissance mission, to either send mini probes to those planets or monitor them… It seems quite anomalous.”

The object was first detected in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile. This discovery marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb pointed out an intriguing detail: an image of the object shows an unexpected glow appearing in front of it, rather than trailing behind, which is typical for comets. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.”

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is notably bright for its distance from the sun. However, Dr. Loeb emphasized that the most striking feature of this interstellar visitor is its trajectory.

“If you imagine objects entering the solar system from random directions, just one in 500 of them would be aligned so well with the orbits of the planets,” he stated. The object, which originates from the center of the Milky Way galaxy, is predicted to pass near Mars, Venus, and Jupiter—an event that, according to Loeb, is highly improbable to occur by chance. “It also comes close to each of them, with a probability of one in 20,000,” he added.
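The one-in-500 figure is a chance-alignment argument: of all isotropically random arrival directions, only a small fraction would lie so close to the plane of the planets’ orbits. As an illustrative back-of-envelope sketch (not Loeb’s published calculation), the probability that a random direction falls within a given angle of a fixed plane can be derived analytically and checked by simulation; the 5-degree threshold used in the example below is an assumption for demonstration only:

```python
import math
import random

def alignment_probability(theta_deg):
    # For an isotropically random direction, the component perpendicular
    # to any fixed plane is uniform on [-1, 1] (Archimedes' hat-box
    # theorem), so the chance of lying within theta degrees of that
    # plane is sin(theta).
    return math.sin(math.radians(theta_deg))

def monte_carlo_alignment(theta_deg, n=200_000, seed=0):
    # Empirical check: sample the perpendicular component of isotropic
    # directions and count how many fall within theta degrees of the
    # reference plane.
    rng = random.Random(seed)
    thresh = math.sin(math.radians(theta_deg))
    hits = sum(abs(rng.uniform(-1.0, 1.0)) < thresh for _ in range(n))
    return hits / n
```

Inverting this toy model, a chance probability of 1 in 500 corresponds to a very tight alignment threshold of roughly arcsin(0.002), about 0.11 degrees; the actual figure quoted by Loeb folds in additional trajectory details beyond this simplified geometry.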

NASA has indicated that 3I/ATLAS will reach its closest point to the sun—approximately 130 million miles away—on October 30. Dr. Loeb remarked on the potential implications of the object’s nature, stating, “If it turns out to be technological, it would obviously have a big impact on the future of humanity. We have to decide how to respond to that.”

In a related note, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics previously confused a Tesla Roadster launched into orbit by SpaceX CEO Elon Musk with an asteroid, highlighting the complexities of identifying celestial objects.

A spokesperson for NASA did not immediately respond to requests for comment from Fox News Digital.

According to Fox News Digital, the ongoing investigation into 3I/ATLAS may provide insights into the nature of interstellar objects and their potential significance in our understanding of the universe.

Are Social Media Platforms Operating Within Reasonable Guidelines?

Mark Zuckerberg’s recent testimony in a landmark social media addiction trial raises questions about the responsibility of tech companies in addressing addiction and mental health issues.

The term “reasonable” took center stage during Mark Zuckerberg’s recent testimony in a significant social media addiction trial held in Los Angeles Superior Court. The case, brought forth by a plaintiff who claims that social media platforms have contributed to her depression and suicidal thoughts, has drawn considerable attention to the ethical responsibilities of these companies.

As the trial unfolds, TikTok and Snapchat have already reached settlements, leaving Meta, the parent company of Facebook and Instagram, and Google’s YouTube as the remaining defendants. The implications of this case extend beyond the courtroom, as it raises critical questions about the role of social media in users’ mental health and well-being.

During the proceedings, Zuckerberg provided five hours of testimony, which concluded on February 18. Following his appearance, he exited the courthouse through a back door, a move that has sparked speculation about the pressures surrounding the case.

To gain a deeper understanding of the issues at play, Vikram R. Bhargava, an assistant professor of strategic management and public policy at the George Washington University School of Business, offers expert insight. Bhargava’s research focuses on the ethical and policy challenges posed by emerging technologies, including the dynamics of social media and technology addiction.

His work has been featured in prominent business ethics journals, addressing the responsibilities of tech companies in mitigating the risks associated with their platforms. Bhargava emphasizes the need for a clear definition of what constitutes “reasonable” conduct in the tech industry, particularly as it pertains to user engagement and mental health.

As the trial progresses, the outcomes could set important precedents for how social media platforms are regulated and held accountable for their impact on users. The case not only highlights individual experiences but also reflects broader societal concerns about the influence of technology on mental health.

For those interested in exploring this topic further, Bhargava is available for interviews. To arrange a discussion, please contact Claire Sabin at claire.sabin@gwu.edu.

This trial represents a pivotal moment in the ongoing conversation about the responsibilities of social media platforms and their role in society. As the legal proceedings continue, many are watching closely to see how the court will address the complex interplay between technology, addiction, and mental health.

According to GlobalNetNews, the outcomes of this case could have lasting implications for the future of social media regulation and the ethical obligations of tech companies.

Aalyria, Google Spinout Startup, Secures $100 Million in Funding

Aalyria, a startup spun out from Google, has secured $100 million in funding to enhance high-speed communication networks amid increasing U.S. government investment in defense technology.

Aalyria, a startup that emerged from Google in 2022, has successfully raised $100 million in a recent funding round led by Battery Ventures. This investment has elevated the company’s valuation to an impressive $1.3 billion.

Specializing in high-speed communication networks, Aalyria’s software is designed to improve service delivery across various environments, including land, sea, and space. This funding round coincides with a notable increase in U.S. government spending on defense technology and national security satellites, aimed at maintaining a competitive edge over China.

Google continues to hold a stake in Aalyria, which has attracted additional investment from firms such as J2 Ventures and DYNE.

Michael Brown, a general partner at Battery Ventures, highlighted the impact of SpaceX’s Starlink on the satellite industry. He noted that Starlink’s success in commercializing low Earth orbit satellites has heightened competitive concerns among satellite vendors. Starlink has been securing government contracts and appealing to consumers, particularly in regions underserved by traditional high-speed internet services. Brown stated, “They love Starlink but want alternatives, too.”

According to Brown, Aalyria plays a crucial role in this landscape. “When you have a diversity of satellite platforms, including in lower and mid-Earth orbit, the ability to route traffic between them has been nearly impossible. But they provide a seamless networking layer,” he explained.

Aalyria has already established contracts and secured research funding from a variety of partners, including Telesat, the U.S. Air Force, NASA, the Defense Department’s Defense Innovation Unit, the European Space Agency, and other government entities.

In the event of a natural disaster that disrupts ground-based cell towers, Aalyria’s Spacetime software enables a satellite communications network to quickly adapt and cover the affected area within seconds, rather than days. Brian Barritt, the company’s founder and technology chief, emphasized the importance of this capability, stating that in space, the software directs satellites in a constellation to automatically reconfigure to address gaps when other satellites are compromised.

Barritt acknowledged that one of the challenges in the market is that companies developing space-based networks often have significant investments at stake, leading them to consider building their own network orchestration solutions from the ground up. He noted that gaining their confidence can take time, but once they recognize the advantages of having their network operating system collaborate with others, orchestrate networks of networks, and monetize unused capacity, it can significantly shift the dynamics in Aalyria’s favor.

In addition to its software solutions, Aalyria offers Tightbeam, a laser-communication system that can be mounted on ships, aircraft, and other platforms. This technology enables data transmission over distances exceeding 100 kilometers, achieving speeds comparable to those of fiber optic internet.

This funding round and the ongoing developments in Aalyria’s technology come at a pivotal time as the U.S. government increases its investment in defense and satellite technology, further solidifying the company’s position in the market.

According to The American Bazaar, Aalyria’s innovative approach to communication networks positions it as a key player in the evolving landscape of satellite technology.

Sundar Pichai Unveils $15 Billion AI Investment in India’s Visakhapatnam

Sundar Pichai announced a $15 billion investment in artificial intelligence during the AI India Impact Summit, highlighting Visakhapatnam’s emergence as a global AI hub.

During the AI India Impact Summit held in New Delhi, Sundar Pichai, the CEO of Google and Alphabet, announced a significant $15 billion investment aimed at advancing artificial intelligence (AI) in India. Pichai emphasized the transformative potential of AI and its role in shaping the future of technology, particularly in emerging economies.

Speaking on the fourth day of the summit, Pichai described the striking evolution of Visakhapatnam, a coastal city that Google has chosen as a focal point for its AI initiatives. He noted that the city is poised to become a major center for AI development as part of Google’s long-term strategy in India.

“I remember Visakhapatnam being a quiet and modest coastal city brimming with potential. Now, in that same city, Google is establishing a full-stack AI hub, part of our $15 billion infrastructure investment in India,” Pichai stated. He expressed his surprise at the city’s transformation into a global AI hub, highlighting the hub’s future capabilities, including gigawatt-scale computing and a new international subsea cable gateway.

Pichai underscored the significance of AI as a transformative force, stating that it represents “the biggest platform shift of our lifetimes.” He believes that AI has the potential to accelerate progress across various sectors and help emerging economies overcome traditional barriers to growth.

“The product shows what’s possible when humanity dreams big, and no technology has me dreaming bigger than AI,” he said. Pichai pointed out that while the potential for AI is immense, achieving its benefits is not guaranteed and requires concerted effort.

He highlighted the role of AI in advancing scientific discovery, citing the groundbreaking work of Google DeepMind in protein structure prediction. “For 50 years, predicting protein structures was a grand challenge that stalled drug discovery. Demis Hassabis and his team at Google DeepMind asked an audacious question: how could we use AI to solve this? That question led to AlphaFold,” Pichai explained.

This breakthrough, which recently won a Nobel Prize, has condensed decades of research into an open-access database that is now utilized by over 3 million researchers in more than 190 countries. These researchers are leveraging the database to develop malaria vaccines, combat antibiotic resistance, and tackle other critical health challenges.

Pichai further elaborated on the diverse applications of AI within the scientific community, from cataloging DNA disease markers to creating AI agents that serve as partners in research. “We must be equally bold in tackling problems in regions that have lacked access to technology,” he stressed.

In conclusion, Pichai reiterated the importance of responsible and inclusive AI development, emphasizing the need to ensure that the benefits of this technology reach all segments of society. His remarks at the summit reflect a commitment to fostering innovation and addressing global challenges through AI.

This article was republished with permission from Free Press Journal.

Microsoft Appoints Asha Sharma as Gaming Chief Amid Nepotism Claims

Microsoft’s appointment of Asha Sharma as the new head of its gaming division has sparked controversy, with accusations of “Indian nepotism” emerging on social media.

Microsoft announced on Friday that Asha Sharma will succeed Phil Spencer as the executive vice president and chief executive officer of its gaming division. Spencer, who has been with the company for 38 years, is retiring, marking a significant leadership transition for the tech giant’s gaming business.

Sharma, who previously led product development for Microsoft’s artificial intelligence models and services, is stepping into a role that includes overseeing the Xbox brand. Her appointment comes as part of a broader strategy to integrate AI into Microsoft’s offerings.

However, the announcement was met with immediate backlash on social media, where some users criticized the decision to promote Sharma. A vocal minority accused Microsoft of engaging in “Indian nepotism,” a term that quickly gained traction across various gaming forums and platforms like X.

The leadership changes at Microsoft do not end with Sharma. Sarah Bond, who has been serving as president of Xbox, is also set to step down. Matt Booty, the current head of game studios, will transition to the role of chief content officer and report directly to Sharma.

In a company blog post, CEO Satya Nadella outlined the new leadership structure, emphasizing the next phase for Microsoft’s gaming business. Sharma’s experience in building consumer products was cited as a key factor in her selection for the role.

Sharma has a long history with Microsoft, having worked with the company for over a decade. She initially joined the marketing division before leaving in 2013. After spending time at Instacart and Meta, she returned to Microsoft two years ago to take on a senior leadership role focused on core AI products.

Despite her qualifications, Sharma’s promotion has faced scrutiny. Critics on X questioned her lack of direct experience in the gaming industry, with one user stating, “Asha Sharma, the new head of Xbox, is an AI executive with no background in gaming.” Another user linked her promotion to a broader anti-immigrant sentiment, arguing that Microsoft has become synonymous with “Indian nepotism.”

The criticism intensified, with some users pointing to Sharma’s LinkedIn profile to argue that she had never held a position for more than four years, questioning her long-term leadership experience. Others, however, defended the decision, asserting that a chief executive does not need to be a gamer to effectively lead a global gaming business. Some commentators suggested that the backlash against Sharma may reflect underlying racism toward Indians in the tech industry.

The timing of this leadership change is particularly complex for Xbox. Following years of fierce competition with Sony and Nintendo, Spencer acknowledged in 2024 that, with the Xbox One, Microsoft had “lost the worst generation to lose.” In response, Microsoft has made significant investments to expand its reach, including a $69 billion acquisition of Activision Blizzard, while also cutting more than 2,500 jobs and closing multiple studios since 2024.

In an email to staff, Sharma sought to reassure employees and long-time players, stating, “We will recommit to our core Xbox fans and players, those who have invested with us for the past 25 years, and to the developers who build the expansive universes and experiences that are embraced by players across the world.” She further emphasized a renewed commitment to Xbox, starting with the console that has shaped the brand’s identity.

The ongoing debate surrounding Sharma’s appointment highlights the complexities of leadership transitions in the tech industry, particularly in a landscape that is increasingly influenced by global talent and diverse backgrounds. As Microsoft navigates this new chapter, the implications of these changes will be closely watched by both industry insiders and consumers alike.

According to The American Bazaar, the reactions to Sharma’s promotion underscore the challenges that come with leadership changes in a competitive market.

Magure Achieves ISO Certifications for Reliable AI System Development

Magure, a UAE-based enterprise AI company, has achieved ISO 9001:2015, ISO/IEC 27001:2022, and ISO/IEC 42001 certifications, underscoring its commitment to building reliable and secure AI systems.

Magure, an enterprise AI company based in the United Arab Emirates, has announced a significant achievement: the attainment of ISO 9001:2015, ISO/IEC 27001:2022, and ISO/IEC 42001 certifications. This milestone highlights the company’s dedication to developing AI systems that are not only reliable but also secure and responsibly managed.

As organizations increasingly transition from experimenting with artificial intelligence to integrating it into mission-critical operations, trust has become a crucial factor for success. The need for quality, security, and responsible governance in AI deployment is now a foundational requirement rather than an optional consideration.

“As AI systems become more autonomous and deeply integrated into business operations, enterprises need more than innovation—they need assurance,” stated Akhil Koka, CEO of Magure. “These certifications validate the way Magure builds and manages AI systems and reinforce our mission to help enterprises scale AI with confidence, accountability, and long-term trust.”

With these certifications, Magure joins a select group of organizations worldwide and stands out as one of the early adopters in the UAE to demonstrate compliance with standards related to quality management, information security, and AI management systems. This accomplishment solidifies Magure’s position as a trusted partner for enterprises looking to deploy AI at scale.

As AI becomes increasingly embedded in core business functions, enterprises face growing challenges related to operational reliability, data security, regulatory compliance, and ethical oversight. The certifications obtained by Magure reflect a comprehensive approach to addressing these challenges throughout the entire AI lifecycle.

The ISO 9001:2015 certification for Quality Management Systems validates Magure’s quality management practices, ensuring that AI solutions are designed, delivered, and continuously improved through consistent and repeatable processes. This framework supports reliable, production-grade deployments for enterprises.

ISO/IEC 27001:2022 for Information Security Management Systems confirms that information security, privacy protection, and operational resilience are integral to Magure’s platforms and services. This certification safeguards enterprise data and AI operations throughout the AI lifecycle.

ISO/IEC 42001:2023, recognized as the world’s first international standard for Artificial Intelligence Management Systems, acknowledges Magure’s structured approach to managing AI responsibly. This certification embeds transparency, accountability, and oversight into the governance and operation of AI systems.

Together, these standards create a unified foundation for enterprise AI that can be trusted in real-world, regulated, and high-impact environments.

Magure’s ISO certifications align with the broader vision for responsible and secure AI adoption in the UAE. The principles embedded in ISO 9001, ISO/IEC 27001, and ISO/IEC 42001 closely reflect the expectations set by initiatives such as the UAE National AI Strategy 2031, the Dubai International Financial Centre’s data protection framework, and Dubai’s AI security policies. These frameworks emphasize trust, accountability, and resilience at the core of enterprise AI systems.

By aligning internationally recognized ISO standards with regional frameworks, Magure empowers enterprises operating in the UAE and beyond to adopt AI systems that are secure, well-governed, and designed for long-term trust.

Central to Magure’s platform strategy is MagOneAI, a unified, end-to-end agentic AI platform designed to assist enterprises in building, deploying, and managing autonomous AI applications that seamlessly integrate with existing data sources and operational workflows.

The three ISO standards are directly embedded into the operations of MagOneAI. Quality by design, aligned with ISO 9001, ensures that standardized, lifecycle-wide processes govern the design, deployment, monitoring, and improvement of agentic AI applications, delivering predictable performance from experimentation to production.

Security by default, aligned with ISO/IEC 27001, incorporates role-based access controls, encrypted data handling, environment segregation, continuous monitoring, and audit-ready logging to protect sensitive enterprise data as AI agents operate autonomously.

Responsible AI management, aligned with ISO/IEC 42001, introduces clear accountability and transparency into agent behavior, alongside policy-driven controls, risk management, and lifecycle governance. This ensures that AI systems remain observable, controllable, and compliant as they scale.

This integrated approach allows enterprises to move beyond isolated AI pilots and confidently deploy autonomous, production-grade AI systems.

The same ISO-aligned principles extend across Magure’s broader AI ecosystem. MagLabs, Magure’s use-case discovery and AI workflow environment, applies these standards from early experimentation through operational readiness. Additionally, MagVisionIQ, its computer vision platform, operates under the same disciplined quality, security, and responsible AI practices for real-world deployments.

Together, these platforms provide enterprises with a consistent and governed foundation for scaling AI without fragmentation as use cases grow in complexity and impact.

According to The American Bazaar, Magure’s commitment to these standards positions it as a leader in the responsible deployment of AI technologies.

Nobel Laureate Supports Musk and Gates on Future Job Reduction

As automation and artificial intelligence reshape the workforce, a Nobel laureate suggests that future generations may enjoy more free time and fewer traditional jobs.

On a serene morning in Stockholm, a Nobel Prize-winning physicist observes a robotic arm pouring coffee with remarkable precision. This small act serves as a microcosm of a much larger transformation taking place in the world of work.

“Your grandchildren will probably work less than you,” he states calmly. “Maybe a lot less.”

While offices outside buzz with activity and deadlines loom, inside research labs and warehouses, machines are increasingly capable of performing tasks that once required human intellect. From drafting emails and analyzing contracts to diagnosing illnesses and even generating software code, the capabilities of automation are expanding rapidly.

The pressing question many individuals find themselves pondering is no longer a matter of science fiction: If machines can do my job, what happens to me?

A Structural Shift, Not Just Another Tech Cycle

When Nobel laureates align their views with influential figures like Elon Musk and Bill Gates, it captures public attention. Several esteemed scientists, including theoretical physicist Giorgio Parisi, contend that the rise of artificial intelligence and robotics signifies a shift akin to the Industrial Revolution rather than merely an evolution of technology.

Musk envisions a future characterized by “universal high income,” where the necessity of work becomes optional. Gates similarly foresees AI systems generating “a lot of free time” by managing mundane tasks.

According to these Nobel physicists, productivity is set to soar, human labor hours will diminish, and the conventional notion of a lifelong job may not endure through the century. The trajectory they suggest points toward a future with significantly less compulsory work.

Automation Is Already Here

The evidence of this shift is evident and does not require a telescope to observe. Modern warehouses operate with fleets of autonomous robots, while call centers utilize AI agents to manage thousands of conversations simultaneously. Hospitals are deploying algorithms to analyze scans and identify anomalies.

Historically, automation has eliminated certain jobs while creating new ones; farmers became factory workers, and factory workers moved into offices. This time, however, the landscape may be different.

AI is not limited to replacing physical labor; it also takes on cognitive tasks. It can draft reports, design systems, optimize logistics, and even write self-improving code. Consequently, the economy may maintain or even increase productivity with fewer full-time workers, leading to a society that is richer in productivity but potentially poorer in traditional employment opportunities.

The Paradox of Abundance

Theoretically, this shift should yield greater prosperity. If machines can produce more with less human labor, everyone stands to benefit. Yet, wages remain tethered to hours worked, raising concerns about income distribution. Musk refers to this era as the “age of abundance,” while economists explore models for guaranteed income or taxation of AI-driven capital.

The more profound question, however, is psychological: What occurs when work ceases to be the organizing principle of daily life?

The Hidden Risk: Emptiness

Jobs, even those that are less than ideal, provide a structure to our lives—waking up, commuting, completing tasks, taking breaks, and experiencing small victories. Removing this structure can lead to a sense of disorientation.

The potential danger of a world with fewer jobs is not laziness but rather a sense of meaninglessness. Without intentional design, free time may devolve into passive consumption—endless scrolling, distractions, and algorithm-driven habits.

A Nobel laureate recently articulated this concern: “I’m not afraid of machines working. I’m afraid of humans forgetting what to do when they are not working.”

How to Prepare for a Low-Work Future

If automation continues on its current trajectory, preparation may shift from traditional career paths to resilience. Discussions among technologists, economists, and scientists often highlight three key themes:

First, individuals should cultivate skills driven by curiosity rather than solely for employment. Interests such as art, language, gardening, programming, and music can endure beyond the fluctuations of job markets.

Second, prioritizing financial stability over status can provide flexibility in a world characterized by shifting roles and shorter contracts.

Lastly, strengthening community ties becomes essential as traditional work structures weaken. Those who thrive may not be the busiest individuals today but rather those who have learned to navigate life without constant direction.

A Future That Feels Like a Long Sunday

Imagine a weekday that resembles a leisurely Sunday afternoon. Your AI assistant has efficiently sorted your inbox, autonomous vehicles glide silently outside, and grocery stores operate largely through automation.

You may still work, but perhaps only 10 to 15 focused hours per week, engaging in distinctly human activities such as creativity, empathy, negotiation, and invention. Income might derive from state support or productivity-sharing mechanisms, supplemented by flexible, chosen contributions.

This future will not arrive abruptly; rather, it will gradually unfold—one automated system at a time.

A Civilizational Crossroads

For centuries, technological advancements have reduced the need for physical labor. Electricity, machinery, and computing have consistently shortened work hours. We may now be approaching a pivotal moment where compulsory labor declines significantly.

The central challenge is no longer merely about how we earn a living but rather how we derive meaning when work is no longer the core of our identity. The traditional 40-year, full-time career may prove to be a fleeting historical phase.

The next phase prompts a deeper inquiry: If work becomes optional, what will give life its purpose?

As experts continue to analyze these shifts, the implications for society remain profound.

Will AI eliminate most jobs? While many routine tasks are already automated, experts suggest that total human working hours may decline significantly.

Will individuals personally lose their jobs? It is more likely that unstable, contract-based, or part-time work will replace lifelong employment.

Which jobs are more resilient? Roles requiring complex human interaction, creativity, care, and physical presence tend to resist automation longer.

Ultimately, whether less work is beneficial depends on income policy, social structures, and how individuals choose to use their newfound free time. Managed effectively, it could enhance well-being; poorly managed, it could deepen inequality and social disconnection.

These insights reflect the evolving landscape of work and the need for society to adapt to a future where the nature of employment is fundamentally transformed, according to GlobalNetNews.

The Start of the Robotaxi Price War: Key Insights and Implications

The emergence of robotaxis is reshaping urban transportation, with companies like Waymo leading the charge in a competitive market marked by significant price differences and mixed safety records.

In several American cities, the future of transportation is already here: you can summon a driverless car with just a tap on your smartphone. These autonomous vehicles offer a ride without the small talk, wrong turns, or the need to tip. A driverless ride from Waymo in San Francisco averages around $8.17, while a traditional Uber ride in the same city costs approximately $17.25. The robotaxi price war has officially begun.

Waymo, a subsidiary of Alphabet (Google’s parent company), is currently the leader in the driverless car market. The company has provided an impressive 15 million driverless rides since its inception, with current figures showing about 400,000 rides per week. Valued at $126 billion, Waymo’s services are available in several major cities, including Phoenix, the San Francisco Bay Area, Los Angeles, Austin, Atlanta, and Miami. By 2026, the company plans to expand its reach to Dallas, Denver, Washington, D.C., London, Tokyo, and more.

In contrast, Tesla, which launched its robotaxi service in Austin last June, has made slower progress. The company has deployed roughly 31 vehicles, and each ride still requires a safety monitor to be present. This level of supervision highlights the challenges Tesla faces in achieving full autonomy.

Amazon’s Zoox is another player in the robotaxi arena, introducing a unique pod that lacks a steering wheel and can drive in both directions. Currently, rides in Las Vegas and San Francisco are free as the company awaits regulatory approval to begin charging for its services.

Waymo’s technology relies on a combination of cameras, lidar (a laser-based sensor that builds a 3D map of the environment), and traditional radar, allowing it to operate effectively in total darkness and adverse weather conditions. Tesla’s approach is more cost-effective, relying on eight cameras alone, which lets it offer rides at a lower rate of $1.99 per kilometer.

However, the safety of these autonomous vehicles remains a topic of concern. Waymo has reported 1,429 incidents to regulators since 2021, resulting in 117 injuries and two fatalities. The company asserts that it has 80% fewer injury crashes than human drivers, but the National Highway Traffic Safety Administration (NHTSA) has documented several safety issues, including three software recalls, one of which was issued last December for the vehicle’s failure to stop for stopped school buses.

Personal experiences with these robotaxis can vary significantly. One individual recounted a ride where the vehicle dropped her off a full mile from her intended destination, with no option to correct the course. With no human driver to assist, she was at the mercy of the robotaxi’s navigation system.

When a robotaxi encounters a situation it cannot navigate, a human operator in a remote center can intervene by viewing the car’s cameras and guiding it through the confusion. During a Senate hearing, Waymo acknowledged that some of these remote operators are based in the Philippines, a revelation that did not sit well with lawmakers.

As urban transportation evolves, the economics of car ownership are also changing. With robotaxis able to operate more than 15 hours a day at fares below the combined cost of gas, insurance, and other ownership expenses, owning a vehicle may soon feel akin to maintaining a gym membership that goes largely unused.
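The ownership argument can be made concrete with a rough, hypothetical calculation. The annual ownership figures below are illustrative assumptions, not numbers reported in the article; only the $8.17 average San Francisco fare comes from the reporting above.

```python
# Hypothetical back-of-the-envelope comparison of owning a car vs. robotaxis.
# All annual cost figures are invented assumptions for illustration.
FUEL = 2000         # $/year, assumed
INSURANCE = 1800    # $/year, assumed
MAINTENANCE = 1200  # $/year, assumed
ownership_per_year = FUEL + INSURANCE + MAINTENANCE  # $5,000/year

WAYMO_FARE = 8.17   # average San Francisco fare cited in the article

# Number of robotaxi rides per year at which costs roughly break even.
rides_to_break_even = ownership_per_year / WAYMO_FARE
print(round(rides_to_break_even))  # prints: 612
```

Under these assumptions, a rider could take roughly 600 robotaxi trips a year, well over one per day, before matching the running costs of ownership, which is the intuition behind the gym-membership comparison.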

The future of driving appears to be steering toward a reality where no one is behind the wheel. For those who still believe self-driving cars are a thing of the future, it may be time to reconsider; the ride is already underway.

According to Fox News, the robotaxi landscape is rapidly changing, with companies vying for dominance in a market that promises to redefine urban mobility.

Panera Bread Data Breach Exposes Personal Information of 5.1 Million Customers

Panera Bread has confirmed a data breach that has exposed the personal information of approximately 5.1 million customers, prompting class-action lawsuits and concerns over identity theft.

Panera Bread has confirmed a significant cybersecurity incident that has compromised the personal information of millions of its customers. The hacking group ShinyHunters has claimed responsibility, stating that it stole a vast amount of customer records, leading to serious concerns for anyone who has interacted with the popular bakery chain.

Earlier this year, ShinyHunters added Panera Bread to its data leak site, initially asserting that it had stolen over 14 million customer records. The stolen data reportedly includes names, email addresses, phone numbers, home addresses, and account-related information. In response, Panera Bread acknowledged the breach, describing the exposed data as customer “contact information.” The company has since contacted law enforcement and taken steps to address the situation, although it has not disclosed specific technical details regarding the attack or whether customers need to take any immediate actions.

Even seemingly innocuous “contact information” can pose significant risks when it falls into the wrong hands. Such data can be exploited for identity theft, targeted phishing attacks, and social-engineering scams that are increasingly convincing.

ShinyHunters claims that the attackers accessed Panera’s systems through Microsoft Entra single sign-on (SSO). While Panera has not confirmed this assertion, it aligns with recent warnings from cybersecurity firm Okta about a rise in voice-phishing attacks targeting SSO platforms. In these attacks, criminals impersonate IT or helpdesk staff, pressuring employees to approve authentication requests or enter login credentials on fraudulent SSO pages. This method relies on human trust rather than technical vulnerabilities, making it particularly effective.

Initially, the claim of 14 million affected customers suggested a massive breach. However, researchers at Have I Been Pwned? later clarified that while the attackers stole 14 million records, this did not equate to 14 million unique individuals. After analyzing the leaked dataset, researchers estimate that the breach has impacted approximately 5.1 million unique customers. The exposed information includes email addresses, names, phone numbers, and physical addresses.
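The gap between 14 million records and 5.1 million individuals comes from deduplication: the same person can appear once per order, device, or loyalty entry. A minimal sketch of that analysis, using invented data rather than anything from the actual leak, might look like this:

```python
# Illustrative sketch only: the data here is invented, not from the Panera leak.
# It shows why a raw record count can shrink sharply once duplicates are
# collapsed, as breach-notification analyses typically do.
def unique_individuals(records):
    """Count distinct people by normalizing and deduplicating email addresses."""
    seen = set()
    for rec in records:
        email = rec.get("email", "").strip().lower()
        if email:
            seen.add(email)
    return len(seen)

# The same person often appears once per order, device, or loyalty entry.
records = [
    {"email": "Alice@example.com", "source": "order"},
    {"email": "alice@example.com", "source": "loyalty"},
    {"email": "bob@example.com", "source": "order"},
]
print(len(records), unique_individuals(records))  # prints: 3 2
```

Real analyses are more involved (people reuse multiple addresses, and records may lack emails entirely), but the principle is the same: records overcount people.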

This distinction is crucial, but it does not eliminate the associated risks. Once data is publicly released, it can quickly circulate across criminal forums and be reused for malicious purposes for years to come.

ShinyHunters reportedly attempted to extort Panera Bread before releasing the stolen data. When those efforts failed, the group published a 760MB archive containing millions of customer records on its leak site. This incident reflects a broader trend in cybercrime, where many groups now focus on stealthily stealing data and threatening public exposure rather than deploying ransomware to lock systems. Such attacks are often faster, harder to detect, and can be just as profitable.

The breach has already led to legal repercussions, with multiple class-action lawsuits filed in U.S. federal court. These lawsuits allege that Panera failed to adequately protect customer data, claiming that the company knew or should have known about existing security vulnerabilities. The lawsuits seek damages, improved security practices, and long-term identity theft protection for affected customers. Panera has not publicly commented on the ongoing litigation.

This is not the first time Panera Bread has faced a significant security lapse. In 2018, a cybersecurity researcher revealed that the company had left millions of customer records exposed online in plain text, which subsequently led to lawsuits and settlements. Repeated breaches often indicate deeper systemic challenges, as large organizations can struggle to secure cloud services, identity systems, and employee access at scale. When attackers target identity platforms rather than infrastructure, a single misstep can expose millions of records.

As customers often remain unaware of the risks until weeks or months after a breach, it is essential to take proactive measures to limit the potential fallout. If you have ever created a Panera Bread account, reset your password immediately. If you have reused that password elsewhere, those accounts may also be at risk, as cybercriminals frequently test breached passwords across email, shopping, and banking sites.

Utilizing a password manager can help generate strong, unique passwords for each account and securely store them, eliminating the need to reuse credentials. Many password managers also provide alerts if your email or passwords appear in known data breaches, allowing for swift action to secure your accounts.

Implementing two-factor authentication (2FA) adds a second layer of security during login, typically through an app or device you control. Even if someone obtains your password through phishing or a breach, 2FA makes it significantly harder for them to access your account.
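The one-time codes generated by most authenticator apps follow the TOTP standard (RFC 6238): a shared secret is combined with the current 30-second time window, so a stolen password alone is useless without the device holding the secret. A minimal sketch using only the Python standard library, verified against the RFC's published test vector:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)   # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F         # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's published test secret; at T=59 seconds the 6-digit code is 287082.
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, t=59))  # prints: 287082
```

Because the server and the app each compute the same code independently from the shared secret and the clock, nothing a phisher captures from the password field lets them reproduce the next 30-second code.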

Cybercriminals often follow up breaches with fake emails or in-app messages that appear to offer assistance or security updates. It is crucial to verify the sender’s identity and avoid clicking on links within such messages. When in doubt, access the app or website directly instead of responding to the message.

Identity theft becomes a genuine risk when names, email addresses, phone numbers, and physical addresses are exposed. Identity theft protection services can monitor your personal information, alert you if it appears on the dark web, and watch for attempts to open new accounts in your name. In the event of a breach, these services often provide recovery support to help freeze accounts, dispute fraudulent activity, and guide you through the cleanup process.

Scammers do not rely on a single breach; they often combine leaked data with information from data broker sites to create detailed profiles. Data removal services can assist in removing your phone number, home address, and other personal details from numerous sites, making it more difficult for criminals to target you with convincing scams or identity fraud.

The recent data breach at Panera Bread serves as a stark reminder that even well-known brands can become significant targets for cybercriminals. While the company asserts that only contact information was exposed, such data can still fuel scams and identity theft long after the initial headlines fade. Remaining vigilant and proactive in the wake of breach news is essential for safeguarding your digital life.

For further information on protecting your personal data and navigating the aftermath of a breach, consult resources from cybersecurity experts.

According to Fox News, the situation continues to evolve as Panera Bread addresses the fallout from this incident.

FDA Resumes Review of Moderna’s mRNA Influenza Vaccine

The FDA has agreed to review Moderna’s application for the first mRNA-based flu vaccine after initially declining to do so, following a meeting with the company.

The Food and Drug Administration (FDA) has reversed its earlier decision and will now review Moderna’s application for the first mRNA-based flu vaccine. This change comes after a Type A meeting between Moderna and the agency, where the company proposed full approval for adults aged 50 to 64, as well as accelerated approval for those 65 and older, contingent on additional studies involving seniors.

The FDA has set a target date of August 5 for completing its review, which could allow the vaccine to be available in time for the upcoming flu season. This decision marks a significant step in the development of mRNA technology for flu prevention, a field that has faced scrutiny and skepticism from various quarters.

Critics of mRNA technology, including Robert F. Kennedy Jr. and other officials from the U.S. Department of Health and Human Services, have previously expressed doubts about the efficacy and safety of mRNA vaccines for respiratory viruses. Their concerns have led to the withdrawal of some federal funding related to mRNA vaccine research.

As the FDA prepares to review Moderna’s application, experts from George Washington University (GWU) are available to provide insights into the implications of this decision and the potential impact of mRNA technology on public health. Faculty members include Elizabeth Choma, a pediatric nurse practitioner and clinical assistant professor; Jennifer Walsh, a clinical assistant professor focused on pediatrics and health assessment; and Emily Smith, an associate professor specializing in infectious diseases and epidemiology.

Other experts from GWU include Asefeh Faraz Covelli, an associate professor in the Family Nurse Practitioner program; April Barbour, an internist and associate professor of medicine; and Mia Marcus, an associate clinical professor and primary care provider. Additionally, Maria Portela Martinez, an assistant professor of emergency medicine, and Andrew Meltzer, a professor of emergency medicine and chief of the clinical research section, are also available for commentary.

David Diemert, the clinical director of the GW vaccine research unit, and Jose Lucar, an associate professor of infectious diseases, are among the other faculty members who can provide expert opinions on the evolving landscape of vaccine development. Kelly Gebo, the dean of the GW Milken Institute School of Public Health, brings her expertise as an infectious disease physician and epidemiologist, focusing on disparities in healthcare access and outcomes.

The reopening of the review process for Moderna’s mRNA flu vaccine underscores the ongoing evolution of vaccine technology and its potential role in combating seasonal influenza. As the FDA moves forward with its review, the medical community and the public will be closely watching the developments surrounding this innovative approach to flu vaccination.

For further insights and to schedule interviews with GWU experts, interested parties can contact Katelyn Deckelbaum at katelyn.deckelbaum@gwu.edu.

According to Newswise, this decision could pave the way for a new era in flu prevention.

Trendy Tech Terms Influencing Internet Culture in 2023

Five key tech terms—slop, burner accounts, shadowbans, clickbait, and targeted ads—are shaping the way users interact with social media and perceive online content.

If your social media feed feels noisier, stranger, or more manipulated than it used to, you’re not alone. The internet has developed its own language, and buzzwords are quietly influencing what you see, what you don’t see, and how companies target you. From viral “slop” content to shadowbans and targeted ads, these terms play a significant role in how information spreads and how platforms manage user accounts.

Understanding these five key phrases can help you navigate the complexities of your digital life and regain control over your online experience.

Slop: The Noise in Your Feed

The term “slop” refers to mass-produced, low-effort digital content that is often generated quickly by artificial intelligence or created solely for clicks and engagement. This type of content includes spammy articles, recycled videos, misleading thumbnails, and other materials that lack real value.

While slop may seem harmless, it can crowd out reliable information, spread misinformation, and overwhelm your feed with noise instead of useful content. Social media platforms often struggle to control slop because it is designed to manipulate algorithms.

Fortunately, you can take back control by curating your feed and filtering out the noise.

Burner Accounts: The Hidden Identities

A burner account is a secondary or anonymous social media account used to conceal a person’s real identity. Some individuals create burner accounts for privacy, while others use them for trolling, harassment, or secretly viewing content.

Because burner accounts are difficult to trace, they are frequently associated with online harassment, fake engagement, or manipulation of public conversations. While platforms attempt to detect suspicious behavior, many burner accounts still evade detection.

Being cautious with unknown accounts can help protect your safety online.

Shadowbans: The Silent Filters

A shadowban can affect not only creators but also what users see. Social media platforms sometimes limit the visibility of specific accounts, topics, or types of content without notifying users. This means that posts may be hidden, pushed lower in your feed, or never shown to you at all, even if you follow the account.

This type of filtering is often driven by algorithms designed to reduce spam, harmful content, or policy violations. However, it can also shape your perception of what is popular or trending without your awareness.

Understanding shadowbans can help you recognize how your feed is curated and the potential biases that may influence your online experience.

Clickbait: The Allure of Misleading Headlines

Clickbait refers to exaggerated, misleading, or emotionally charged headlines designed to attract attention and drive clicks. While some clickbait may be harmless, it often leads to low-quality or misleading content that fails to deliver on its promises.

Clickbait exploits curiosity, fear, or surprise—powerful emotional triggers that drive engagement. This tactic is commonly employed by low-quality publishers and viral content farms.

Being aware of clickbait can help you discern between valuable content and sensationalized headlines.

Targeted Ads: The Personalization of Advertising

Targeted ads utilize data about your behavior, searches, location, and interests to deliver personalized advertisements. This is why you might see ads related to something you recently searched for or discussed near your phone.

Advertisers build detailed profiles based on browsing activity, app usage, and online behavior to predict what you are most likely to buy or engage with. This reliance on data collection means that adjusting your privacy settings, limiting ad tracking, and regularly reviewing app permissions can reduce how much data advertisers use to profile you.

If targeted ads feel a little too accurate, it’s because data brokers are constantly collecting and selling your information. Beyond adjusting privacy settings, consider removing your personal data from broker sites to minimize the profile advertisers build around you.

The modern internet operates on more than just technology; it thrives on attention, algorithms, and influence. Understanding terms like slop, shadowban, and targeted ads can help you recognize how platforms shape your experience and how companies compete for your clicks. The more you understand these trends, the easier it becomes to filter out noise, protect your privacy, and maintain control over what you see online.

For further insights into trending internet terms or to have something explained, you can reach out at Cyberguy.com.

Wearable Robotics Transforming Human Mobility in Walking and Running

Wearable robotics, including Nike’s Project Amplify and the Hypershell X exoskeleton, are transforming how we walk and run, aiming to enhance movement rather than replace it.

In recent years, the field of robotics has expanded beyond the confines of factories and laboratories, making its way into our daily lives. Wearable robotics, which include powered footwear and lightweight exoskeletons, are emerging as a new consumer category designed to assist movement rather than replace physical effort.

Historically, innovations in sports technology have focused on enhancing speed and performance, often benefiting elite athletes. However, the focus is shifting towards accessibility and support for everyday users. Nike’s Project Amplify exemplifies this trend. Developed in collaboration with robotics partner Dephy, this system integrates a carbon plate within the shoe and a motorized cuff worn above the ankle. The cuff uses sensors to monitor stride patterns in real time, providing subtle assistance that feels natural and smooth, rather than forcing movement.

Previous attempts at creating powered footwear faced challenges due to the weight of batteries and motors, which made the devices feel cumbersome and unbalanced. Modern designs have addressed these issues by relocating energy storage to the ankle or hips, thereby reducing strain on the feet and improving overall balance. Enhanced battery technology and advanced motion sensors allow these systems to adapt to users’ strides dynamically, making the experience feel like an extension of the body. Nike aims for a commercial release of Project Amplify around 2028.

However, Nike is not the only player in this evolving market. The Hypershell X is another notable example, designed as a lightweight outdoor exoskeleton for hikers and long-distance walkers. This system wraps around the waist and legs, employing small motors to alleviate fatigue during climbs and on uneven terrain. The goal is straightforward: to help users go farther without feeling drained. Hypershell has also introduced the X Ultra, a more robust version tailored for steeper terrains and longer excursions, providing stronger assistance while remaining compact enough to wear under standard outdoor gear.

Dnsys has also entered the market with the X1 all-terrain exoskeleton, aimed at hikers and outdoor enthusiasts. Unlike earlier lab prototypes, the X1 has been successfully sold through crowdfunding and direct online orders, marking it as one of the early consumer-ready entries in the wearable robotics space.

Another innovative product is WIM from WIRobotics, a wearable robot that weighs approximately 3.5 pounds and supports natural hip movement while walking. This device is targeted at older adults, active individuals, and those recovering from minor injuries, providing assistance without the bulkiness of traditional medical devices.

The medical applications of wearable robotics have been developing for a longer time. Companies like Ekso Bionics and ReWalk have created powered exoskeletons that assist individuals with spinal cord injuries or strokes in standing and walking. These systems are primarily used in rehabilitation clinics and select personal mobility programs, demonstrating how wearable robotics have evolved from medical settings to consumer-oriented designs.

What unites these diverse products is a common goal: to actively assist movement rather than merely track it. Many individuals face barriers to physical activity that are not solely related to injury; hesitation often plays a significant role. Concerns about knee pain, fatigue, or the fear of slowing down others can deter people from engaging in physical activity. Wearable robotics aim to bridge this confidence gap by reducing fatigue and supporting joints, making movement feel more attainable for those who might otherwise avoid it.

Comparatively, the rise of e-bikes serves as a relevant analogy. Electric assistance has not eliminated cycling; instead, it has broadened the demographic of people who feel comfortable riding a bike. Similarly, powered footwear and wearable robotics could democratize walking and running, making these activities more accessible to a wider audience.

For some, this technology might mean replacing short car trips with walking, while for older adults, it could facilitate prolonged activity without excessive fatigue. Casual runners may find they can complete their workouts with energy to spare, rather than struggling through the final stretch. This shift is not about creating super athletes; it is about empowering more individuals to participate in physical activities.

Even if you are not inclined to use a powered exoskeleton or eagerly awaiting the arrival of motorized shoes in 2028, the implications of this technology are significant. For those who experience discomfort during long walks or skip runs out of fatigue concerns, wearable robotics are designed with exactly these challenges in mind.

For some, this could translate to walking an extra mile effortlessly, while for others, it might mean keeping pace with friends or feeling more confident about starting a new fitness routine. Wearable robotics are reshaping the conversation around fitness, shifting the focus from speed and performance to comfort and accessibility.

As wearable robotics continue to evolve, the question is not whether they will improve, but how society will choose to integrate them into daily life. If these technologies can help you walk and run with less strain, would you consider using them, or would you prefer to rely solely on your own efforts? This is a conversation worth having as we navigate the future of movement.

According to Fox News, the potential of wearable robotics to enhance everyday mobility is becoming increasingly clear.

Bill Gates to Meet Andhra Pradesh Chief Minister for Strategic Talks

Bill Gates is set to visit Amaravati, Andhra Pradesh, for strategic discussions with Chief Minister N. Chandrababu Naidu, focusing on health and artificial intelligence.

In a significant development highlighting the intersection of technology and governance, Bill Gates, co-founder of Microsoft and a prominent figure in the tech industry, is scheduled to visit Amaravati, the capital of Andhra Pradesh. His meeting with Chief Minister N. Chandrababu Naidu aims to explore opportunities for expanding cooperation in two critical areas: health and artificial intelligence (AI).

This visit underscores Gates’s ongoing commitment to global health and technological advancement while showcasing Andhra Pradesh’s ambition to emerge as a leader in these fields. As India rapidly advances its digital infrastructure and technological capabilities, the country has become a focal point for tech giants, thanks to its vast and diverse market.

Under Naidu’s leadership, Andhra Pradesh has been proactive in leveraging technology to enhance governance and public welfare. Naidu, often recognized as a tech-savvy leader, has played a crucial role in driving digital initiatives across the state, which include e-governance and smart city projects.

The discussions between Gates and Naidu are expected to focus on how AI can be utilized to improve healthcare delivery in the state. India faces numerous healthcare challenges, including a shortage of medical professionals and inadequate infrastructure, particularly in rural areas. AI holds the potential to address some of these issues by facilitating remote diagnostics, predictive analytics for disease outbreaks, and personalized medicine.

Gates’s insights, supported by the resources of the Bill & Melinda Gates Foundation, could be instrumental in developing solutions tailored to the specific needs of Andhra Pradesh. The meeting is also likely to explore collaborative projects that align with the Gates Foundation’s focus on global health issues, such as eradicating infectious diseases and enhancing maternal and child health.

Andhra Pradesh could serve as a pilot region for innovative health interventions that, if successful, might be scaled across India and other developing regions. Gates’s interest in AI aligns with a broader global trend, where technology is increasingly recognized as a catalyst for economic and social development.

AI, in particular, has the potential to revolutionize various sectors, from agriculture to education, offering unprecedented opportunities for growth and efficiency. For Andhra Pradesh, embracing AI could lead to improved agricultural productivity, enhanced educational outcomes, and more efficient public services.

This visit also reflects a symbiotic relationship between global tech leaders and regional governments. As tech companies seek to expand their presence in emerging markets, they find willing partners in governments eager to harness technology for development. This partnership is mutually beneficial: tech companies gain access to new markets and data, while governments receive the technological expertise and investment necessary to drive growth.

In conclusion, Bill Gates’s visit to Andhra Pradesh represents more than just a high-profile meeting. It symbolizes the potential for technology to transform societies and underscores the importance of strategic partnerships in realizing this potential. As Andhra Pradesh continues its journey toward becoming a tech-driven state, the insights and collaboration from Gates and his foundation could play a pivotal role in shaping its future. Both Gates and Naidu share a vision of leveraging technology for the greater good, and this meeting may mark a significant step toward achieving that vision.

This report is based on information from GlobalNetNews.

AI Summit Sees Strong Attendance on Opening Day

The AI Summit in New Delhi attracted a significant crowd on its opening day, showcasing India’s growing role in the global artificial intelligence landscape.

The bustling metropolis of New Delhi, renowned for its vibrant culture and historic landmarks, has added another highlight to its profile by hosting the much-anticipated AI Summit. On its opening day, the conference drew an impressive crowd, reflecting the increasing interest and investment in artificial intelligence across India. The event served as a melting pot of innovation and collaboration, underscoring India’s expanding prowess in the AI sector.

India, with its vast pool of tech-savvy talent and a rapidly digitizing economy, has emerged as a formidable player in the global AI arena. The summit, held at the expansive Pragati Maidan, showcased this evolution. Attendees, ranging from industry leaders to tech enthusiasts, were greeted with a plethora of exhibits that highlighted the country’s advancements in AI technologies.

The significance of the summit extends beyond the impressive turnout. It marks a pivotal moment in India’s technological journey, as the nation seeks to position itself as a global hub for AI development. With a government eager to foster innovation and a private sector keen to capitalize on AI’s potential, the summit serves as a platform to bridge these ambitions. It is a space where ideas are exchanged, collaborations are forged, and future pathways are charted.

The opening day featured keynote speeches from prominent figures in the tech industry, both domestic and international. These speeches set the tone for the event, emphasizing the transformative potential of AI across various sectors, including healthcare, agriculture, finance, and education. The narrative was clear: AI is not merely a technological advancement but a powerful tool for societal change.

However, India’s AI journey is not without its challenges. As the country embraces this technology, it must navigate issues related to data privacy, ethical AI deployment, and the digital divide. The summit’s robust agenda, which includes panel discussions and workshops on these critical topics, indicates a proactive approach to addressing these concerns.

The event also highlighted the role of startups in driving AI innovation. India’s startup ecosystem, one of the largest in the world, is a hotbed of AI-driven solutions. Many of these startups were present at the summit, showcasing cutting-edge technologies that promise to revolutionize industries. Their participation underscores the entrepreneurial spirit fueling India’s AI ambitions.

International participation at the summit further emphasizes India’s growing influence in the AI sector. Delegates from various countries attended, exploring opportunities for collaboration and investment. This international interest reflects India’s strategic importance in the global tech landscape, particularly as nations seek to diversify their tech partnerships.

The AI Summit is more than just an exhibition; it is a reflection of India’s aspirations and capabilities. As the world grapples with the implications of AI, India is positioning itself not just as a participant but as a leader in shaping the future of this technology. The massive turnout on day one is a testament to the excitement and interest surrounding India’s AI journey.

As the summit progresses, it will be intriguing to see how the dialogues and discussions unfold, particularly in areas such as AI ethics, policy-making, and international collaboration. The outcomes of these conversations could significantly influence the trajectory of AI development in India and beyond.

In conclusion, the AI Summit in New Delhi is a landmark event that highlights India’s commitment to embracing and leading in the AI revolution. It is a celebration of innovation, a forum for critical discussions, and a catalyst for future growth. As the summit continues, all eyes will be on New Delhi, eager to see what the next chapter in India’s AI story will bring, according to GlobalNetNews.

Dhireesha Kudithipudi Leads First U.S. Open-Access Neuromorphic Computing Hub

Dhireesha Kudithipudi is spearheading the first open-access neuromorphic computing hub in the U.S. at the University of Texas at San Antonio, aiming to democratize artificial intelligence research.

Indian American computer scientist Dhireesha Kudithipudi is transforming the landscape of artificial intelligence (AI) in the United States. As the founding director of the MATRIX AI Consortium at the University of Texas at San Antonio (UTSA), she is at the forefront of launching THOR: The Neuromorphic Commons, the first open-access hub of its kind in the country.

Funded by the National Science Foundation, the THOR project seeks to democratize access to neuromorphic computing, a field that emulates the architecture of the human brain to process information. Unlike traditional silicon chips, which consume significant amounts of electricity regardless of the task, neuromorphic systems operate on an “event-based” model, activating only when new data is detected.
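The "event-based" idea can be sketched with a toy leaky integrate-and-fire neuron, the basic unit that spiking neuromorphic hardware simulates. This is an illustrative model only, with made-up constants, not the SpiNNaker2 API: the neuron does work only when input events arrive, and produces output (a spike) only when those events push it over a threshold.

```python
# Toy leaky integrate-and-fire (LIF) neuron illustrating event-based
# computation: with no input events, nothing is computed and nothing fires.
# All names and constants here are illustrative, not any real neuromorphic API.

def simulate_lif(input_events, threshold=1.0, leak=0.9, weight=0.4):
    """Run one LIF neuron over a sequence of timesteps.

    input_events: list of 0/1 flags, one per timestep (1 = incoming spike).
    Returns the list of timesteps at which the neuron fired.
    """
    potential = 0.0
    spikes = []
    for t, event in enumerate(input_events):
        potential *= leak              # passive decay each step
        if event:                      # work happens only on events
            potential += weight
        if potential >= threshold:     # threshold crossing -> output spike
            spikes.append(t)
            potential = 0.0            # reset after firing
    return spikes

# A burst of input events drives spikes; silence produces none.
busy = simulate_lif([1, 1, 1, 1, 0, 0, 1, 1, 1, 1])
idle = simulate_lif([0] * 10)
print(busy)  # spike times during the active burst
print(idle)  # [] -- no input, no activity
```

The contrast with a conventional chip is the `idle` case: a clocked processor burns power every cycle regardless of input, whereas this neuron's loop does no meaningful work until an event arrives.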

“THOR is the U.S. national hub for neuromorphic computing,” said Kudithipudi, who also holds the Robert F. McDermott Chair in Engineering at UTSA. “We are democratizing the technology, expanding industry-academia partnerships, and serving as a catalyst for bringing neuromorphic computing closer to real-world applications.”

Historically, access to such advanced hardware has been limited to elite corporate laboratories or well-funded academic institutions. In contrast, UTSA’s new initiative functions similarly to a public library, allowing researchers and students nationwide to apply for free access to run experiments. This approach significantly lowers the barrier to entry for the next generation of engineers.

At the core of the hub is the SpiNNaker2 system, a substantial platform featuring approximately 400,000 processing elements. Developed in collaboration with SpiNNcloud, this hardware utilizes energy-efficient ARM-based cores, akin to those found in smartphones, to simulate the pulsing signals of biological neurons and synapses.

The practical implications of this energy efficiency are profound. According to the research team, neuromorphic chips have the potential to revolutionize medical devices. For instance, they could enable pacemakers to adapt in real-time to a patient’s physical distress or allow hearing aids to intelligently filter background noise without quickly draining their batteries.

In addition to energy savings, Kudithipudi and her colleagues are addressing the issue of “catastrophic forgetting,” a common flaw in AI systems where machines lose previously acquired knowledge when learning new information. By mimicking the brain’s “lifelong learning” capabilities, THOR could facilitate the development of AI that evolves continuously.
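Catastrophic forgetting can be demonstrated with a deliberately tiny, hypothetical model (unrelated to THOR's actual methods): a one-parameter regressor trained with plain gradient descent on task A, then on task B. Because nothing in the update rule preserves old knowledge, learning B overwrites A.

```python
# Minimal catastrophic-forgetting demo: the model y_hat = w * x is trained
# on task A (true slope +2), then retrained on task B (true slope -3).
# After task B, error on task A blows up -- the old task is "forgotten".

def train(w, data, lr=0.05, epochs=200):
    """Plain SGD on squared error for the model y_hat = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]   # slope +2
task_b = [(x, -3.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]  # slope -3

w = train(0.0, task_a)          # learn task A: w converges near +2
err_a_before = mse(w, task_a)   # near zero
w = train(w, task_b)            # learn task B: w converges near -3
err_a_after = mse(w, task_a)    # large: task A performance is destroyed

print(err_a_before, err_a_after)
```

A "lifelong learning" system of the kind the article describes would add some mechanism, such as protecting or consolidating important weights, so that `err_a_after` stays low while task B is still learned.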

This initiative involves a nationwide collaboration, with contributions from experts at UT Knoxville, UC San Diego, and Harvard University. The official launch of THOR is scheduled for February 23, marking a significant milestone for UTSA’s newly established College of AI, Cyber and Computing.

For Kudithipudi, the overarching goal is to ensure that the future of computing is not only more powerful but also more accessible and sustainable for all.

The information for this article was sourced from The American Bazaar.

OnPhase Appoints Indian-American Sudarshan Ranganath as Chief Product Officer

OnPhase has appointed Sudarshan Ranganath as Chief Product Officer to enhance its AI-driven financial automation platform amid the evolving needs of modern finance departments.

OnPhase, a key player in the AI-driven financial automation sector, has announced the appointment of Indian American executive Sudarshan Ranganath as its new Chief Product Officer. In this pivotal role, Ranganath will guide the company’s product vision and execution, with a focus on scaling its unified platform to address the dynamic requirements of contemporary finance departments.

Ranganath joins the Tampa-based company at a time when digital transformation is rapidly reshaping the office of the CFO. With over 20 years of experience in business spend management and digital payments, he brings a wealth of knowledge in developing intelligent, cloud-based solutions designed to simplify complex financial workflows. His appointment is viewed as a strategic move aimed at enhancing OnPhase’s market presence and accelerating the adoption of its automated payment technologies.

“I am thrilled to be joining OnPhase at such an exciting time,” Ranganath stated, highlighting the transformative impact of AI on finance teams. He pointed out that CFOs are increasingly pressured to deliver strategic insights while maintaining stringent operational controls. Ranganath believes that OnPhase’s unified platform is essential for eliminating friction and reducing manual errors in financial processes.

Before taking on this new role, Ranganath served as Senior Vice President of Product Management and Strategy at Corcentric. During his tenure, he played a crucial role in driving revenue growth through both organic innovation and strategic acquisitions. He is also recognized for developing an AI-centric trading partner network aimed at modernizing B2B commerce.

Ranganath’s career includes leadership positions at notable companies such as Ellucian, Rivermine, and VeriSign, where he concentrated on SaaS transformations and international expansion. His extensive background in accounts payable and payment software aligns seamlessly with OnPhase’s core value proposition, as emphasized by Robert Michlewicz, CEO of OnPhase.

“He has worked at the intersection of product strategy, technology, and customer outcomes,” Michlewicz remarked. “His leadership will be instrumental as we take our platform and our company to the next level.”

For over 25 years, OnPhase has provided organizations with comprehensive tools to manage the entire lifecycle of an invoice, from capture to final payment. By consolidating these functions into a single platform, the company aims to eliminate the data silos that often hinder traditional finance departments.

Currently recognized on both the Deloitte Technology Fast 500 and the Inc. 5000 lists, OnPhase continues to establish itself as a leader in empowering finance leaders to operate with greater clarity and confidence, according to The American Bazaar.

India Showcases Technological Innovations at AI Impact Summit 2026

India is hosting the AI Impact Summit 2026, gathering global tech leaders to explore the transformative potential of artificial intelligence across economies, governance, and society.

As artificial intelligence (AI) approaches a pivotal role in reshaping human civilization, India is welcoming a summit of global tech leaders to discuss its implications for economies, governance, and society. The five-day Artificial Intelligence Impact Summit 2026 commenced on Monday evening, with Prime Minister Narendra Modi inaugurating the India AI Impact Expo 2026 at Bharat Mandapam, the summit venue in New Delhi.

In a post on X, Modi emphasized the significance of the summit, stating, “This is proof that our nation is making rapid progress in the fields of science and technology and is contributing significantly to global development.” He further highlighted the potential and capabilities of India’s youth, underscoring the nation’s commitment to harnessing AI for human-centric progress.

The theme of the summit, ‘Sarvajana Hitaya, Sarvajana Sukhaya,’ translates to “welfare for all, happiness for all,” reflecting India’s dedication to utilizing AI for the benefit of all citizens. The first day featured a leadership session focused on harnessing AI for the future of learning and work, examining how AI is reshaping global employment and redefining necessary skills.

Another significant session addressed the transformation of India’s judicial ecosystem through AI. Experts discussed the technology’s potential to enhance efficiency, transparency, and accessibility within the judicial system. Additionally, the summit included discussions on culturally grounded AI and social norms, emphasizing that AI systems often fail not due to technical limitations but because they overlook essential social contexts.

The future of employability in the age of AI is a central theme, with experts exploring how AI may create new job opportunities while rendering some existing roles obsolete, necessitating large-scale workforce reskilling. A special session titled “Artificial Intelligence for Smart and Resilient Agriculture – From Research to Solutions” aimed to gather diverse perspectives on how AI can support sustainable, efficient, and climate-resilient agricultural practices.

This summit is notable as the first global AI summit of its kind to take place in the Global South. It aims to foster a future where AI’s transformative impact serves humanity, drives inclusive growth, and promotes people-centric innovations to protect the planet.

The groundwork for the summit included five rounds of public consultations and global outreach sessions held in cities such as Paris, Berlin, Oslo, New York, Geneva, Bangkok, and Tokyo. The summit is anchored in three guiding principles: the Sutras of People, Planet, and Progress, which frame how AI should serve humanity, safeguard the environment, and promote inclusive growth.

Prior to the New Delhi summit, a strategic pre-summit gathering took place in Washington, D.C., where policymakers, technologists, diplomats, and founders convened to discuss “Co-Creating the Future: Global South–Global North Collaboration for AI Impact.” This gathering reinforced the notion that AI discussions can no longer be geographically concentrated.

The New Delhi Summit aims to chart a path toward a future where AI’s transformative power serves humanity, fosters social development, and promotes innovations that protect the planet. It also seeks to amplify the voice of the Global South, ensuring that technological advancements and opportunities are shared broadly rather than concentrated in a few regions.

However, the rapid proliferation of AI across society presents urgent challenges, including disruptions to traditional employment patterns, exacerbation of biases, and increased energy consumption. These developments underscore the need to move beyond aspirational frameworks and deliver measurable, concrete impacts that address both the promises and perils of AI.

OpenAI CEO Sam Altman, ahead of the summit, noted India’s tech talent, national strategy, and optimism about AI’s potential, stating that the country possesses “all the ingredients to be a full-stack AI leader.” In an article for The Times of India, he outlined three priorities for collaboration: scaling AI literacy, building computing and energy infrastructure, and integrating AI into real workflows.

Altman expressed OpenAI’s commitment to partnering with the Indian government to make AI and its benefits accessible to more people across the country. “AI will help define India’s future, and India will help define AI’s future. And it will do so in a way only a democracy can,” he wrote.

The AI Impact Summit 2026 represents a significant milestone in the global conversation surrounding artificial intelligence, highlighting India’s role as a leader in the technology’s development and implementation.

According to The American Bazaar, the summit is set to pave the way for a future where AI’s transformative capabilities are harnessed for the greater good.

Android Malware Disguised as Fake Antivirus App Targets Users

Cybersecurity experts warn that a fake antivirus app named TrustBastion is using Hugging Face to distribute Android malware that can steal sensitive information from users’ devices.

Android users should be on high alert as cybersecurity researchers have identified a new threat involving a fake antivirus application called TrustBastion. This malicious app exploits Hugging Face, a widely used platform for sharing artificial intelligence (AI) tools, to deliver dangerous malware that can capture screenshots, steal personal identification numbers (PINs), and display fraudulent login screens.

The TrustBastion app initially presents itself as a helpful security tool, claiming to offer virus protection, phishing defense, and malware blocking. However, once installed, it quickly reveals its true nature. The app falsely alerts users that their device is infected, prompting them to install an update that actually delivers the malware. This tactic, known as scareware, preys on users’ fears and encourages them to act without thinking.

According to Bitdefender, a global cybersecurity firm, the campaign surrounding TrustBastion is particularly concerning due to its deceptive nature. Victims are often misled by ads or warnings suggesting their devices are compromised, leading them to manually download the app. The attackers cleverly hosted TrustBastion’s APK files on Hugging Face, embedding them within seemingly legitimate public datasets, which allowed the malicious code to go unnoticed.

Once installed, TrustBastion immediately prompts users to download a “required update,” which is when the actual malware is introduced. Despite researchers reporting the malicious repository, Bitdefender noted that similar repositories quickly reemerged, often with minor cosmetic changes but maintaining the same harmful functionality. This rapid re-creation complicates efforts to fully eliminate the threat.

The malware associated with TrustBastion is invasive and poses significant risks. Bitdefender reports that it can take screenshots, display fake login screens for financial services, and capture users’ lock screen PINs. The stolen data is then transmitted to a third-party server, allowing attackers to drain bank accounts or lock users out of their devices.

Google has reassured users that those who stick to official app stores are generally protected against this type of malware. A Google spokesperson stated, “Based on our current detection, no apps containing this malware are found on Google Play.” Google Play Protect, which is enabled by default on Android devices with Google Play Services, helps safeguard users by warning them about or blocking apps known to exhibit malicious behavior, even if they originate from outside the Play Store.

This incident serves as a stark reminder of the importance of cautious app downloading practices. Users are advised to only download applications from reputable sources, such as the Google Play Store or the Samsung Galaxy Store, which have moderation and scanning processes in place. It is also crucial to scrutinize app ratings, download counts, and recent reviews, as fake security apps often garner vague feedback or experience sudden rating spikes.

Even the most vigilant users can fall victim to data exposure. Utilizing a data removal service can help eliminate personal information, such as phone numbers and email addresses, from data broker sites that criminals exploit. While no service can guarantee complete data removal from the internet, these services actively monitor and systematically erase personal information from numerous websites, providing peace of mind and reducing the risk of follow-up scams and account takeovers.

To further enhance security, users should regularly scan their devices with Google Play Protect and back up that protection with robust antivirus software. Google Play Protect automatically removes known malware, but it is not infallible and has historically missed some threats on Android devices.

To safeguard against malicious links that could install malware and compromise personal information, users should ensure they have strong antivirus software installed across all devices. This software can also help detect phishing emails and ransomware, protecting personal information and digital assets.

Additionally, users should avoid installing apps from websites outside of official app stores, as these apps bypass essential security checks. It is vital to verify the publisher name and URL before downloading any application. Enabling two-step verification (2FA) and using strong, unique passwords stored in a password manager can also help prevent account takeovers.
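One concrete form of the verification habit described above is checksum comparison: when a publisher lists a SHA-256 digest for a download, the local file can be hashed and compared before it is ever opened or installed. The sketch below uses Python's standard `hashlib`; the file name and digest in the usage note are illustrative, not from this incident.

```python
# Verify a downloaded file against a publisher-supplied SHA-256 checksum
# before trusting it. File names and digests used here are illustrative.
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream the file in chunks so large downloads don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, published_sha256):
    """Return True only if the local file matches the published digest."""
    return sha256_of(path) == published_sha256.lower()

# Usage (values are hypothetical):
# ok = verify_download("app-release.apk", "9f86d081884c7d65...")
```

A mismatch means the file is not the one the publisher released, whether through corruption or tampering, and should be deleted rather than installed.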

Finally, users should remain cautious about granting accessibility permissions, as malware often exploits these to gain control over devices. This incident illustrates how quickly trust can be weaponized, with a platform designed for advancing AI research being repurposed to distribute malware. A fake antivirus app has become the very threat it claims to protect against, underscoring the need for users to scrutinize even seemingly trustworthy applications.

For those who have encountered suspicious activity on their devices, sharing experiences can help raise awareness. Users are encouraged to report their findings and concerns to relevant platforms.

According to Bitdefender, staying informed and cautious is the best defense against evolving cyber threats.

Astronauts Arrive at ISS for Eight-Month Mission Following Medical Emergency

Four astronauts arrived at the International Space Station for an eight-month mission, following an early evacuation due to a medical emergency last month.

Four new astronauts arrived at the International Space Station (ISS) on Saturday, restoring the lab to full capacity after a medical emergency forced an early evacuation of several crew members last month. The international crew, which includes NASA Commander Jessica Meir, launched from Cape Canaveral in a SpaceX rocket on Friday, embarking on a journey that lasted approximately 34 hours.

“That was quite the ride,” Meir remarked shortly after the launch, as reported by BBC News. “We have left the Earth, but the Earth has not left us.” The launch had been delayed by weather concerns.

Joining Meir for the next eight to nine months aboard the ISS are NASA astronaut Jack Hathaway, France’s Sophie Adenot, and Russian cosmonaut Andrei Fedyaev. Both Meir and Fedyaev have previous experience aboard the ISS, with Meir notably participating in the first all-female spacewalk in 2019. Adenot, a military helicopter pilot, is only the second French woman to travel to space, while Hathaway serves as a captain in the U.S. Navy.

NASA reported that the spacecraft is set to autonomously dock with the space station’s Harmony module at 3:15 p.m. CT on Saturday, traveling at a speed of 17,000 mph in Earth orbit. “What an absolutely wonderful start to the day,” said NASA Administrator Jared Isaacman following the launch. “This mission has shown in many ways what it means to be mission-focused at NASA.”

Isaacman also highlighted the recent adjustments made by NASA, including the early return of Crew-11 and the expedited launch of Crew-12, all while preparing for the upcoming Artemis 2 mission, which is scheduled to begin in early March.

This mission marks the 12th crew rotation with SpaceX as part of NASA’s Commercial Crew Program. Crew-12 will engage in scientific investigations and technology demonstrations aimed at preparing humans for future exploration missions to the Moon and Mars, as well as providing benefits for people on Earth.

After docking, the capsule’s hatch opened at 4:14 p.m. CT, allowing the crew to enter the space station. “We are so excited to be here and get to work,” Meir expressed upon arrival. Adenot added, “The first time we looked at the Earth was mind-blowing. … We saw no lines, no borders.”

Prior to the arrival of the new crew, only one American and two Russians remained at the space station, ensuring its continued operation. The medical evacuation that took place in January was the first of its kind in 65 years, as NASA reported that a crew member experienced a serious health issue. The agency has not disclosed the nature of the medical condition or the identity of the astronaut involved, citing medical privacy.

The astronaut who faced the medical emergency, along with three other crew members who had launched with them, returned to Earth more than a month earlier than planned after the decision was made to bring them home.

According to the Associated Press, the successful arrival of the new crew marks a significant step forward for ongoing research and exploration efforts aboard the ISS.

Superhealth Launches SuperOS, Claims First Agentic AI Hospital

Superhealth has introduced SuperOS, touted as the world’s first agentic AI operating system designed to manage hospital operations entirely, marking a significant advancement in healthcare automation in India.

Superhealth has launched what it claims to be the world’s first agentic AI operating system, named SuperOS, designed to manage a hospital from end to end. This initiative positions India as a potential leader in large-scale healthcare automation.

SuperOS is crafted as a comprehensive system that integrates nearly every aspect of hospital operations. According to the company, it encompasses everything from outpatient consultations and diagnostics to surgical workflows and discharge summaries. Varun Dubey, the founder of Superhealth, emphasized the platform’s capabilities, stating, “SuperOS is the world’s first agentic AI operating system built to actually run a hospital, from clinical decisions to operations, from labs to discharge, from OT assignments to auto prescriptions, it does it all.”

Dubey further explained that SuperOS understands the needs of doctors, nurses, and patients, as well as 15 Indian languages. The system orchestrates outcomes by facilitating real-time interactions between human staff and AI agents. “Only Superhealth could build this, because we are the only full-stack provider that designs, builds, and operates hospitals while also developing all the technology that runs them,” he added. “This is not software that merely assists healthcare. This is technology that operates healthcare.”

The introduction of SuperOS places Superhealth in the midst of global discussions about integrating AI into hospital systems. While many healthcare facilities are exploring AI tools for specific tasks, Superhealth is marketing SuperOS as a unified operating layer that connects clinical and administrative functions in real time.

According to the company, SuperOS serves as an intelligent framework across the hospital, coordinating tasks between AI agents and human teams. In outpatient departments, it acts as an ambient clinical co-pilot, providing patient history, assisting with differential diagnoses, drafting prescriptions for physician approval, and coordinating with lab technicians and pharmacists directly in the consultation room. The aim is to reduce wait times and enhance meaningful interactions between doctors and patients.

SuperOS is also integrated into radiology and pathology workflows. The platform replaces traditional Picture Archiving and Communication Systems (PACS) with cloud-based imaging systems and employs instant 3D volumetric analysis to aid in the detection of conditions in neurology, orthopaedics, chest trauma, and oncology. Superhealth claims that this integration reduces reporting time by 30 percent and effectively triples the capacity of specialists.

For inpatient and surgical care, SuperOS coordinates operating rooms, surgeons, and recovery workflows. It continuously monitors patients in both regular and intensive care units with personalized alerts, automates discharge summaries through a feature dubbed “Magic Discharge,” and conducts real-time audits of all clinical interactions to enhance medical quality.

Dubey framed the launch of SuperOS as part of a broader national ambition, stating, “India has a unique opportunity to show the world what real, meaningful healthcare AI looks like. SuperOS is built in India, for India, using Indian clinical data. It is also deployed in India and is focused on solving problems that matter to our country and our people.”

Superhealth is working to establish a network of 100 hospitals, supported by full-time senior clinicians, advanced infrastructure, and a zero-commission business model aimed at transparency and simplicity. Central to this expansion is SuperOS, which the company describes as operating seamlessly alongside healthcare professionals while enhancing efficiency across consultations, diagnostics, surgery, pharmacy, and recovery.

As hospitals worldwide face challenges such as staffing shortages, rising costs, and burnout, Superhealth is making a bold assertion that an AI-native operating system can transition from merely assisting care to actively managing it. The scalability of this model beyond India will be closely monitored by healthcare systems in the United States and other countries.

According to The American Bazaar, the implications of SuperOS could reshape the landscape of hospital management and patient care, setting a precedent for future innovations in healthcare technology.

Instagram Chief Defends App Design Amid Youth Mental Health Lawsuit

Adam Mosseri, head of Instagram, testified in a California trial addressing the platform’s impact on youth mental health, defending its design against claims of addiction and negligence.

Adam Mosseri, the head of Instagram, took the witness stand on Wednesday in a pivotal trial in Los Angeles that could significantly influence how Silicon Valley addresses the mental health of its youngest users.

During his testimony, Mosseri defended Instagram against allegations that the platform was intentionally designed to be addictive, particularly among young users, contributing to a mental health crisis among adolescents. The case was brought forth by a 20-year-old woman from California, identified as Kayle, who argued that the app’s “endless scroll” feature and instant gratification elements led to years of depression and body dysmorphia from an early age.

In response to the term “addiction,” Mosseri reframed the discussion, describing it as “problematic use” that varies from individual to individual. He also addressed internal communications from 2019 concerning face-altering “plastic surgery” filters. While some teams within the company raised concerns that these tools could harm the self-esteem of teenage girls, Mosseri and Meta CEO Mark Zuckerberg initially considered lifting a ban on such filters to promote user growth. Ultimately, the company decided to maintain the ban on filters that overtly promote cosmetic surgery.

“I was trying to balance all the different considerations,” Mosseri told the jury, according to reports from the courtroom.

Several parents who have lost children to the adverse effects of social media were present in the courtroom, sharing their grief as part of the ongoing case. Victoria Hinks, whose daughter died by suicide at the age of 16, stated that their children had become “collateral damage” in Silicon Valley’s “move fast and break things” culture. Outside the courthouse, she remarked, “Our children were the first guinea pigs,” a sentiment that Mosseri countered during his testimony by asserting that the “move fast and break things” motto, originally coined by Zuckerberg, is no longer applicable.

The plaintiff’s attorney, Mark Lanier, argued that the platform operates like a “slot machine in a child’s pocket,” designed to exploit developing brains for profit. He contended that Meta was aware of the psychological toll its platform could take but prioritized user engagement over the well-being of its young audience.

This trial serves as a critical “bellwether” for over 1,500 similar lawsuits filed across the country. It also tests the boundaries of Section 230, the federal law that typically protects platforms from liability for user-generated content. If the jury finds Meta negligent in its product design, it could lead to significant financial repercussions and compel substantial changes to social media algorithms.

Meta maintains that it has implemented numerous safety features for teens, including parental controls and time limits. Zuckerberg is expected to testify later this month as the trial continues to explore the complex relationship between technology profits and the vulnerability of the teenage mind, according to American Bazaar.

Back-to-Back Founder Exits Shake Elon Musk’s xAI Team

Elon Musk’s xAI is facing significant leadership changes as two co-founders recently departed, raising concerns about the company’s stability amid ambitious plans and regulatory scrutiny.

Elon Musk’s xAI is currently navigating a challenging period, marked by the recent departures of two co-founders within just two days. This leadership churn comes at a time when expectations for the company are exceptionally high, as Musk continues to promote bold ambitions for the future of artificial intelligence.

In the latest development, influential AI researcher Jimmy Ba announced his exit from xAI on Tuesday. In a post on X, Ba expressed gratitude for his early involvement, stating he was “grateful to have helped cofound at the start.” His departure follows that of fellow co-founder Tony Wu, who revealed his resignation just one day earlier.

The timing of these resignations is particularly notable, as they occurred shortly after xAI was merged with Musk’s aerospace company, SpaceX, earlier this month. This merger is reportedly part of SpaceX’s preparations for a public listing later this year.

Ba, who is a professor at the University of Toronto, played a significant role in developing research that informed xAI’s Grok 4 models. His exit adds to a growing list of senior departures from the startup, which has now seen six of its original twelve founders leave, five of them within the past year.

Other co-founders, including Igor Babuschkin, Kyle Kosic, and Christian Szegedy, have also exited the company. Additionally, Greg Yang announced last month that he would be scaling back his involvement to focus on his health, specifically dealing with Lyme disease.

The merger between xAI and SpaceX was structured as an all-stock transaction, valuing SpaceX at $1 trillion and xAI at $250 billion, according to documents cited by CNBC. Earlier, in March 2025, Musk utilized xAI in a separate all-stock deal to acquire his social media platform, X.

These leadership changes come amid increasing regulatory scrutiny for xAI in various regions, including Europe, Asia, and the United States. Investigations were initiated after xAI’s Grok chatbot and image generation tools were found to facilitate the large-scale creation and distribution of non-consensual explicit content, commonly referred to as deepfake pornography. This material included images of real individuals, including minors, raising alarms among regulators across multiple jurisdictions.

Musk founded xAI in 2023 with a team of 11 others, positioning the company as a competitor to OpenAI and Google in the rapidly evolving AI landscape. At its inception, xAI stated its mission was to “understand the true nature of the universe,” setting an ambitious tone for what Musk envisioned as a transformative venture.

In response to the recent departures, Musk quickly convened an all-hands meeting with xAI staff on Tuesday night. This meeting aimed to reset the narrative and outline a sweeping vision for the company’s future. According to reports from The New York Times, Musk told employees that xAI would eventually require a manufacturing base on the moon. He proposed the idea of building AI-powered satellites there and launching them into space using a massive catapult. “You have to go to the moon,” Musk stated, as reported by The New York Times.

Musk suggested that establishing a presence on the moon would provide xAI with access to computing capacity far exceeding that of its competitors. He implied that such advancements could unlock forms of intelligence that are currently difficult to conceptualize. “It’s difficult to imagine what an intelligence of that scale would think about,” he added, “but it’s going to be incredibly exciting to see it happen.”

As the company grapples with these leadership changes, Musk appears determined to refocus attention on xAI’s ambitious goals, including the potential for a public listing. The recent exits of key figures underscore the challenges facing the company, but Musk’s vision for the future remains steadfast.

According to The New York Times, the ongoing developments at xAI highlight the complexities of managing a rapidly evolving tech startup in an increasingly scrutinized industry.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms by 2030.

This week, NASA announced the completion of its strategy aimed at sustaining a human presence in space, particularly in light of the planned de-orbiting of the International Space Station (ISS) in 2030. The agency’s document underscores the necessity of ensuring extended stays in orbit following the retirement of the ISS.

“NASA’s Low Earth Orbit Microgravity Strategy will guide the agency toward the next generation of continuous human presence in orbit, enable greater economic growth, and maintain international partnerships,” the document states.

The commitment to this strategy comes amid concerns regarding the readiness of new space stations. With the incoming administration’s focus on budget cuts through the Department of Government Efficiency, there are apprehensions that NASA may face funding reductions.

“Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities,” said NASA Deputy Administrator Pam Melroy.

Commercial space company Voyager is actively developing one of the potential replacements for the ISS. The company has expressed support for NASA’s strategy to maintain a human presence in space. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” stated Jeffrey Manber, Voyager’s president of international and space stations.

The initiative to maintain a permanent human presence in space dates back to President Reagan, who emphasized the importance of private partnerships in his 1984 State of the Union address. “America has always been greatest when we dared to be great. We can reach for greatness,” he said, highlighting the potential for the space transportation market to exceed national capabilities.

The ISS, which has been continuously occupied for 24 years, was launched in 1998 and has hosted over 280 astronauts from 23 countries. The Trump administration’s national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and stressed the need to transition to commercial platforms—a policy that has been maintained by the Biden administration.

“Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031,” NASA Administrator Bill Nelson remarked in June.

Recent discussions have raised questions about the continuity of human presence in space. “I just want to talk about the elephant in the room for a moment, continuous human presence. What does that mean? Is it continuous heartbeat or continuous capability?” Melroy noted during the International Astronautical Congress in October.

NASA’s finalized strategy has taken into account the concerns of commercial and international partners regarding the potential loss of the ISS without a commercial station ready to take its place. “Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy explained. “I think this continuous presence, it’s leadership. Today, the United States leads in human spaceflight. The only other space station that will be in orbit when the ISS de-orbits, if we don’t bring a commercial destination up in time, will be the Chinese space station. We want to remain the partner of choice for our industry and for our goals for NASA.”

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

“We’ve had some challenges, to be perfectly honest with you. The budget caps that were a deal cut between the White House and Congress for fiscal years 2024 and 2025 have left us without as much investment,” Melroy acknowledged. “So, what we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit.”

Voyager has stated that it is on track with its development timeline and plans to launch its Starlab space station in 2028. “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station,” Manber asserted. “Everyone knows SpaceX, but there are hundreds of companies that have created the space economy. If we lose permanent presence, you lose that supply chain.”

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could be crucial for some projects. NASA may also consider funding new space station proposals, including concepts from Vast Space of Long Beach, California, which recently unveiled plans for its Haven modules, aiming to launch Haven-1 as soon as next year.

“We absolutely think competition is critical. This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” Melroy concluded.

According to Fox News, NASA’s strategy reflects a commitment to ensuring a sustainable human presence in space as the agency navigates the transition from the ISS to future commercial platforms.

Microsoft ‘Important Mail’ Email Scam: How to Identify It

Scammers are increasingly impersonating Microsoft, sending deceptive emails that threaten account access to trick victims into clicking malicious links.

Scammers are becoming more sophisticated in their tactics, particularly when it comes to impersonating reputable companies like Microsoft. Recently, a fraudulent email claiming to be an urgent warning about email account access has raised alarms among users.

The email appears serious and time-sensitive, which is a common strategy used by scammers to provoke immediate action. A concerned individual named Lily reached out for assistance, expressing uncertainty about the validity of the message she received. She attached screenshots of the email, hoping for guidance.

It is crucial to note that this email is not from Microsoft; it is a scam designed to rush individuals into clicking dangerous links. The urgency of the message is a red flag that should not be ignored.

Upon closer inspection, several warning signs indicate that the email is fraudulent. For instance, it begins with a generic greeting, “Dear User,” rather than addressing the recipient by name, which is a standard practice for legitimate Microsoft communications.

The email claims that the recipient’s email access will be suspended on February 5, 2026. Scammers often exploit fear and urgency to cloud judgment and prompt hasty decisions.

Additionally, the email originates from an AOL address (accountsettinghelp20@aol.com), which is another significant indicator of its illegitimacy. Microsoft does not send security notifications from AOL or any other third-party email service.

Another alarming feature of the email is the phrase “PROCEED HERE,” which is designed to incite quick clicks. Legitimate Microsoft communications will always direct users to clearly labeled Microsoft.com pages.

Moreover, the email contains phrases like “© 2026 All rights reserved,” which scammers often copy and paste to create a false sense of authenticity. Genuine Microsoft account alerts do not include image attachments, making this another major warning sign.
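The red flags described above — a generic greeting, a sender domain that does not match the claimed company, a hard deadline, and an all-caps call to action — lend themselves to a simple automated screen. The sketch below is an illustrative heuristic only; the keyword lists, trusted-domain set, and scoring are assumptions for demonstration, not any mail provider's actual filter:

```python
import re

# Illustrative red-flag heuristics; phrase lists and weights are assumptions.
GENERIC_GREETINGS = ("dear user", "dear customer", "dear account holder")
URGENCY_PHRASES = ("will be suspended", "immediately", "within 24 hours", "proceed here")
TRUSTED_DOMAINS = {"microsoft.com", "accountprotection.microsoft.com"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Count phishing red flags; a higher score means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()

    # 1. Generic greeting instead of the recipient's name
    if any(g in text for g in GENERIC_GREETINGS):
        score += 1

    # 2. Sender domain does not match the company the email claims to be from
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1

    # 3. Urgency or threat language designed to rush the reader
    if any(p in text for p in URGENCY_PHRASES):
        score += 1

    # 4. All-caps call to action such as "PROCEED HERE"
    if re.search(r"\b[A-Z]{4,}(?:\s+[A-Z]{4,})+\b", body):
        score += 1

    return score

# The scam described in the article trips every check:
print(phishing_score(
    sender="accountsettinghelp20@aol.com",
    subject="Important Mail",
    body="Dear User, your email access will be suspended on February 5, 2026. PROCEED HERE",
))  # 4
```

A real filter weighs many more signals (authentication headers, link destinations, attachment types), but even this toy score shows why the scam email fails every test a legitimate Microsoft notice would pass.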

If a recipient were to click on the link provided in the email, they would likely be redirected to a counterfeit Microsoft login page. This is a tactic used by attackers to steal personal information, including email credentials, which can lead to further scams and identity theft.

To protect yourself from such scams, it is essential to take a cautious approach when encountering suspicious emails. Here are some steps to consider:

First, do not click on any links, buttons, or images in the email. Avoid replying to the message, and be cautious even when opening attachments, as they can trigger malware or tracking mechanisms.

Ensure that you have strong antivirus software installed and that it is up to date. This software can help block phishing attempts, scan attachments, and alert you to dangerous links before any damage occurs.

If you receive an email like this, report it and delete it from your inbox. There is no reason to keep it, even in your trash folder.

For peace of mind, open a new browser window and navigate directly to the official Microsoft account website. Sign in as you normally would; if there is a legitimate issue, it will be displayed there.

If you accidentally clicked on any links or entered your information, change your Microsoft password immediately. Use a strong, unique password that you do not use elsewhere. A password manager can help generate and securely store your passwords.

Additionally, check if your email has been exposed in previous data breaches. Some password managers include built-in breach scanners that can alert you if your email address or passwords have appeared in known leaks. If you find a match, change any reused passwords and secure those accounts with new, unique credentials.
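One widely used mechanism behind such breach scanners is the k-anonymity range query offered by the Pwned Passwords API: the client sends only the first five characters of the password's SHA-1 hash and matches the returned suffixes locally, so the full password never leaves your machine. The sketch below shows the client-side half of that protocol; the sample response string is fabricated for illustration:

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's uppercase SHA-1 hex digest into the 5-character
    prefix sent to the API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_breached(suffix: str, range_response: str) -> int:
    """Parse a 'SUFFIX:COUNT' range response and return how many times
    the password appeared in known breaches (0 if absent)."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = sha1_prefix_suffix("password")
# A real client would now GET https://api.pwnedpasswords.com/range/<prefix>;
# the response below is a made-up example of the line format.
fake_response = f"0018A45C4D1DEF81644B54AB7F969B88D65:3\n{suffix}:9545824"
print(prefix, times_breached(suffix, fake_response))
```

The design choice matters: because only a 5-character hash prefix is transmitted, the service learns nothing usable about the password itself, which is why password managers can safely run this check automatically.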

Enabling two-factor authentication (2FA) for your Microsoft account adds an extra layer of security, making it more difficult for attackers to gain access even if they have your password.

Scammers often gather information about potential targets through data broker sites. Using a data removal service can help minimize the amount of personal information available online, reducing your vulnerability to phishing attempts.

While no service can guarantee complete removal of your data from the internet, a data removal service can effectively monitor and erase your personal information from numerous websites, providing peace of mind.

Utilize your email app’s built-in reporting tool to help train filters and protect other users from encountering the same scam.

When Microsoft genuinely needs your attention, the communication will look very different from these scams. Recognizing the contrast can make it easier to identify fraudulent messages.

Scammers rely on urgency to distract and manipulate individuals, especially when it comes to something as central to our lives as email. The good news is that taking a moment to pause and verify can make a significant difference.

Lily’s decision to seek help before acting was a wise move that could prevent identity theft and account takeovers. Remember, emails that threaten account shutdowns and demand immediate action are almost always illegitimate. When faced with urgency, take a step back, verify independently, and never let an email rush you into a mistake.

If you have encountered a fake Microsoft warning or a similar scam, share your experience with us at Cyberguy.com.

For more information on protecting yourself from scams, consider signing up for the free CyberGuy Report, which offers tech tips, urgent security alerts, and exclusive deals delivered directly to your inbox.

According to CyberGuy.com, staying informed and cautious is key to safeguarding your digital life.

Ring’s AI Search Party Aims to Locate Lost Dogs More Efficiently

Ring has launched its AI-powered Search Party feature nationwide, enabling users to leverage nearby cameras to quickly locate lost dogs, even if they do not own a Ring device.

Ring has expanded its AI-powered Search Party feature across the United States, allowing anyone to utilize nearby cameras to help locate lost dogs more efficiently.

Losing a dog can be a distressing experience, often leading to frantic searches around the neighborhood and constant refreshes of local social media groups in hopes of finding a clue. To alleviate some of this stress, Ring aims to transform entire communities into additional eyes through the power of artificial intelligence. The Search Party feature now enables users to tap into a network of outdoor cameras to spot missing pets, and for the first time, it is accessible to anyone, regardless of whether they own a Ring camera.

Search Party is designed as a community-driven tool that expedites the reunion of lost dogs with their families. When a user reports a missing dog in the Ring app, nearby outdoor Ring cameras utilize AI to scan recent footage for potential matches. If a possible match is identified, the camera owner receives an alert containing a photo of the lost dog and a video clip. They can then choose to either ignore the alert or assist in the search, ensuring that sharing remains optional and pressure is minimized.

This update marks a significant shift in the functionality of Search Party. Previously, only individuals with Ring devices could access this feature. Now, anyone in the U.S. can download the free Ring Neighbors app, register, and post a lost dog alert. This change allows dog owners to connect with an existing network of cameras without the need for additional hardware or subscription fees. Neighbors without cameras can also contribute by sharing alerts and keeping an eye out for sightings.

Lost pets are already one of the most common types of posts in the Ring Neighbors app, with over 1 million reports of lost or found pets shared last year. Given that approximately 60 million households in the U.S. own at least one dog, the potential impact of Search Party is substantial.

Getting started with Search Party is straightforward. Users can download the Ring app for free from the App Store or Google Play. Once registered, anyone can create a Lost Dog Post in the app. If the post meets the necessary criteria, the app guides users through the steps to activate Search Party. This process involves sharing photos and basic information about the missing dog, after which nearby cameras will begin scanning automatically.

Search Party alerts are temporary. When a user initiates a Search Party in the Ring app, it operates for a few hours. If the dog remains missing, the user must renew the Search Party or start a new one to ensure that nearby cameras continue their search for matches. Once the dog is found, users can update their post to inform the community that the search is over.

The AI technology behind Search Party aims to reunite lost dogs with their owners efficiently. If an outdoor Ring camera detects a potential match, the camera owner is notified with an alert that includes a photo of the missing dog and a video clip. The camera owner retains control throughout the process, deciding whether to share footage or contact the owner through the app, all while keeping their phone number private.

Ring reports that Search Party has already yielded impressive results. In one instance, a woman named Kylee from Wichita, Kansas, was reunited with her mixed-breed dog, Nyx, just 15 minutes after he escaped through a small hole in her backyard fence. A neighbor’s Ring camera captured footage of Nyx and shared it through the app, providing Kylee with her only lead. “I was blown away,” Kylee said, emphasizing that even dogs with microchips can go unrecognized if they lack a collar. She credits the shared video for Nyx’s swift return, stating that she likely would not have found him without the Ring app.

Nyx is not the only success story. Ring claims that Search Party has facilitated the reunion of more than one lost dog per day, including pets like Xochitl in Houston, Truffle in Bakersfield, Lainey in Surprise, Zola in Ellenwood, Toby in Las Vegas, Blu in Erlanger, Zeus in Chicago, and Coco in Stockton, with more reunions occurring daily.

Search Party remains an optional feature that users can enable or disable at any time within the Ring app. Alongside this expansion, Ring has committed $1 million to equip animal shelters with camera systems, aiming to support up to 4,000 shelters across the United States. By integrating shelters into the network, Ring hopes to facilitate faster reconnections between dogs picked up by shelters and their owners. The company is also collaborating with organizations like Petco Love and Best Friends Animal Society and is open to additional partnerships.

Despite its benefits, the launch of Search Party last fall faced some criticism, particularly regarding privacy concerns and Ring’s connections to law enforcement. Ring maintains that participation is voluntary and that sharing footage is optional. However, the feature is enabled by default for compatible outdoor cameras, which has raised eyebrows. Nevertheless, the company appears confident in its offering and is actively promoting Search Party, even featuring it in a Super Bowl commercial.

Search Party taps into a familiar concept of neighbors helping one another during a challenging time. By making this feature available to everyone, Ring has removed a significant barrier, increasing the likelihood of quick reunions. Whether this tool becomes a community staple or ignites further privacy discussions will depend on how it is utilized by the public.

Would you be comfortable with neighborhood cameras assisting in the search for your lost dog, or does that raise concerns about surveillance? Share your thoughts with us at Cyberguy.com.

According to Fox News, the Search Party feature represents a significant advancement in community-driven pet recovery efforts.
