Concerns Over Privacy Clauses in Smart Home Devices

Smart home devices, including TVs and voice assistants, often contain privacy clauses that allow extensive data collection, raising concerns about user privacy and data security.

In today’s digital age, smart devices such as TVs, voice assistants, and connected cars have become integral to our daily lives. However, many users remain unaware of the extensive privacy clauses embedded in the terms of service for these devices. These clauses often permit significant data harvesting, behavioral tracking, and long-term storage of personal information. Some even allow companies to access recordings or share data with third parties.

The reality is that smart devices can create detailed profiles of our daily lives, tracking our schedules, habits, and even conversations. As one expert explains, “Your phone knows where you go. Your smart home knows what you do when you get there.” This commentary highlights the need for users to understand how their devices operate and the implications of their data collection practices.

Here are five surprising privacy clauses associated with common smart devices that many users may not know about.

First, consider connected vehicles. Modern cars are no longer just modes of transportation; they function as connected computers that gather vast amounts of telemetry data. Systems like Android Automotive OS can log numerous data points during regular driving, including speed and driving patterns. This data can be used to infer stops, turns, and even risky driving behaviors. Alarmingly, this information may also be shared with third parties for advertising, insurance, or financing purposes, creating a comprehensive picture of your driving habits.

Next, smart TVs are among the most active data collectors in our homes. Brands like Samsung, LG, and Roku utilize Automatic Content Recognition (ACR) technology, which analyzes what is displayed on the screen in real-time. This information is reported back to the company, and some policies even state that snippets of audio or video may be shared with third parties to tailor advertisements to viewers. This means that everything from your binge-watching habits to the time you spend on certain shows can be packaged and sold to advertisers.

Video doorbells, designed to enhance home security, also collect significant amounts of behavioral data. Devices like the Ring Video Doorbell automatically gather information such as geolocation data, IP addresses, and details about connected devices. Over time, these devices can create a timeline of your daily routine, revealing when you are home or away, and how your household operates. While these signals may seem innocuous individually, together they can provide a detailed blueprint of your life, especially if an account is compromised.

Voice assistants, such as Amazon Echo, are another area of concern. These devices process voice commands in the cloud, and according to company disclosures, voice interactions can be saved indefinitely unless users manually delete them. Over time, this can lead to an accumulation of years’ worth of audio interactions, including everything from grocery lists to personal conversations. Many users are unaware that these recordings may be reviewed by company personnel, raising serious privacy concerns.

Finally, it is essential to recognize that while each smart device collects only a portion of the overall picture, together they can reveal an astonishing amount of detail about your life. Privacy experts often refer to connected homes as “data multipliers,” as the combined data from various devices allows companies to create extremely detailed behavioral profiles. This data is often a crucial part of the business model for many tech companies, helping to offset the cost of the devices themselves.

Fortunately, there are steps you can take to mitigate the amount of information your devices collect. Begin by reviewing the access permissions of your apps. For instance, if you use smart home apps like Ring, check the in-app privacy settings and disable sharing with third parties where possible. On iPhones, set location access to “While Using the App” instead of “Always.” On Android devices, adjust location access to “Allow only while using the app” to limit background tracking.

Most smart TVs also have settings to control content tracking. For example, on Roku, navigate to Settings → Privacy → Smart TV Experience and disable the "Use Info from TV Inputs" option. On Samsung TVs, look for "Viewing Information Services" and turn it off. These adjustments can significantly reduce the amount of data collected.

Additionally, ensure that your smart home devices are secured with strong, unique passwords and enable two-factor authentication whenever possible. A password manager can assist in generating and storing secure passwords. Regularly check if your email has been exposed in past data breaches, and if so, change any reused passwords immediately.
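To make the "strong, unique passwords" advice concrete, here is a minimal sketch of how a password generator works, using Python's standard-library `secrets` module (the cryptographically secure random source a password manager would rely on). The length and character set are illustrative choices, not a specific product's defaults:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation
    using the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password(20)
print(len(pw))  # 20
```

Because `secrets` draws from the operating system's secure randomness, two calls will virtually never produce the same password, which is exactly the property that defeats credential-stuffing attacks based on reused passwords.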

Cleaning up digital clutter can also help reduce your data footprint. Take the time to remove unused apps that may still be accessing your camera, microphone, or location. On iPhones, you can delete apps through storage settings, while Android devices allow you to manage permissions by type, making it easier to see which apps access sensitive features.

Smart speakers, which are always on standby for wake words, can be muted or unplugged in private spaces to prevent unnecessary audio data collection. Many devices include a physical microphone mute button, and users can review and delete past interactions within companion apps.

While smart devices offer convenience and enhance our daily lives, they come with hidden trade-offs regarding privacy. Understanding what data your devices collect and adjusting settings accordingly can help you maintain a level of privacy that you are comfortable with. A quick privacy audit today can prevent years of unnecessary data collection in the future.

For a deeper exploration of how these hidden data practices affect your daily life, consider tuning into the latest episode of the Beyond Connected podcast. Understanding the implications of data collection is crucial in navigating the modern digital landscape, and being informed is the first step toward protecting your privacy.

As you reflect on your smart devices, consider this question: If every device in your home combined its data into a single timeline of your life, how comfortable would you feel with someone seeing it? For more insights and tips, visit CyberGuy.com.

Google Develops AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate future interactions between humans and these intelligent marine mammals.

Google is embarking on an ambitious project to harness artificial intelligence (AI) in an effort to decode the complex communication of dolphins. The ultimate goal is to enable humans to converse with these intelligent creatures.

Dolphins have long been celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit dedicated to studying dolphin sounds for over 40 years, Google is developing a new AI model named DolphinGemma.

The WDP has spent decades correlating specific dolphin sounds with various behavioral contexts. For example, signature whistles are often used by mothers to locate their calves, while burst pulse “squawks” are typically associated with aggressive encounters among dolphins. Additionally, “click” sounds are frequently observed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by the WDP, Google has created DolphinGemma, which builds upon its existing lightweight AI model, Gemma. This new model is designed to analyze a vast library of dolphin vocalizations, identifying patterns, structures, and potential meanings behind these communications.

DolphinGemma aims to categorize dolphin sounds in a manner akin to words, sentences, or expressions in human language. By recognizing recurring sound patterns and reliable sequences, the model can assist researchers in uncovering the hidden structures and meanings within dolphin communication, a task that previously required significant human effort.
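The underlying idea, finding sequences that recur more often than chance, can be sketched with plain n-gram counting over a symbolized recording. This is only a toy illustration, not DolphinGemma's actual architecture, and the symbol alphabet (W for whistle, C for click, S for squawk) is invented for the example:

```python
from collections import Counter

def recurring_ngrams(sequence, n=2, min_count=2):
    """Count every length-n subsequence and keep those that recur,
    a crude proxy for the recurring sound patterns a model might learn."""
    grams = Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Hypothetical symbolized recording of a dolphin exchange
calls = ["W", "C", "C", "W", "C", "C", "S", "W", "C", "C"]
print(recurring_ngrams(calls, n=3))  # {('W', 'C', 'C'): 3}
```

A language model generalizes this idea: instead of exact repeats, it learns which sound categories tend to follow which, across millions of examples.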

According to a blog post from Google, “Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.”

The technology behind DolphinGemma leverages Google’s Pixel phone capabilities, specifically its advanced audio recording technology. This technology allows for high-quality sound recordings of dolphin vocalizations by effectively isolating dolphin clicks and whistles from background noise, such as waves, boat engines, or underwater static. Clean audio is essential for AI models like DolphinGemma, as noisy data can hinder the AI’s learning process.
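The noise-isolation idea can be sketched in miniature: slow-varying background such as wave rumble can be attenuated by subtracting a moving average, which leaves fast transients like clicks intact. This is a simplified high-pass filter for illustration only, not the Pixel's actual audio pipeline:

```python
def high_pass(samples, window=5):
    """Suppress slow-varying background (e.g., wave rumble) by subtracting
    a local moving average, keeping fast transients such as clicks."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        baseline = sum(samples[lo:hi]) / (hi - lo)
        out.append(samples[i] - baseline)
    return out

# Flat background noise plus one sharp "click" at index 4
signal = [1.0, 1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0, 1.0]
filtered = high_pass(signal)
print(max(filtered))  # the click stands out against the flattened background
```

Production systems use far more sophisticated spectral filtering, but the principle is the same: remove the predictable part of the signal so the model trains on the part that carries information.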

Google plans to release DolphinGemma as an open model this summer, making it accessible for researchers worldwide to utilize and adapt for their own studies. Although the model is currently trained on Atlantic spotted dolphins, it has the potential to be fine-tuned for studying other species, such as bottlenose or spinner dolphins.

By providing tools like DolphinGemma, Google aims to empower researchers globally to explore their own acoustic datasets, accelerate the search for communication patterns, and collectively enhance our understanding of these intelligent marine mammals, according to the company’s blog.

AMD, Arm, and Qualcomm Invest in Self-Driving Startup Wayve

Advanced Micro Devices, Arm Holdings, and Qualcomm have invested $60 million in U.K.-based startup Wayve, enhancing its capabilities in autonomous driving and advanced driver-assistance systems.

A high-profile alliance of chipmakers is accelerating the race toward autonomous driving as Advanced Micro Devices (AMD), Arm Holdings, and Qualcomm invest millions into U.K.-based startup Wayve. This collaboration underscores the growing momentum behind AI-powered mobility.

According to TechCrunch, the three companies have collectively invested $60 million into Wayve as part of an extension to its $1.2 billion Series D funding round. This move signals a deepening confidence in advanced driver-assistance systems (ADAS) and automated driving platforms.

This investment highlights how semiconductor firms are increasingly shaping the future of transportation. By backing Wayve, these companies position themselves at the core of AI-driven vehicle systems, where computing power and efficient chip design are critical for enabling real-time decision-making in autonomous environments.

Wayve has attracted attention for its unique approach to automated driving, relying heavily on embodied AI and machine learning rather than traditional rule-based systems. Its platform is designed to scale across various vehicle types while improving through continuous data learning, a capability that aligns closely with next-generation ADAS development.

The involvement of AMD, Arm, and Qualcomm reflects a strategic convergence of hardware and software ecosystems. AMD brings high-performance computing strength, Arm contributes energy-efficient chip architectures widely used in automotive systems, and Qualcomm adds expertise in AI, connectivity, and in-vehicle platforms.

Beyond capital, the partnership suggests broader strategic implications. Industry analysts view this move as a precursor to deeper collaboration or even potential merger activity, as chipmakers seek tighter integration with autonomous driving software providers.

The timing of this investment is notable. Automakers are rapidly transitioning toward software-defined vehicles, which increases the demand for scalable, AI-driven solutions. Investments like this one could help bridge the gap between today’s ADAS capabilities and fully autonomous driving.

Wayve’s growing backing also places it among a new generation of startups challenging established players in the autonomy space. As competition intensifies, alliances between chipmakers and AI startups may determine which platforms emerge as industry standards.

In the evolving mobility landscape, this investment signals a clear shift: the future of driving will be defined as much by silicon and software as by the vehicles themselves, according to TechCrunch.

Pichai, Mamdani, Khanna, Mohan, and Kapoor Named to TIME100 List

Google CEO Sundar Pichai, chef Vikas Khanna, YouTube CEO Neal Mohan, New York City Mayor Zohran Mamdani, and Bollywood actor Ranbir Kapoor have been named to TIME magazine’s 2026 list of the 100 Most Influential People.

NEW YORK, NY—In a prestigious recognition of their contributions to various fields, Google CEO Sundar Pichai, acclaimed chef Vikas Khanna, YouTube CEO Neal Mohan, New York City Mayor Zohran Mamdani, and Bollywood star Ranbir Kapoor have been named to TIME magazine’s 2026 list of the 100 Most Influential People.

Released on April 15, the annual TIME100 list celebrates individuals who have made significant impacts on culture, innovation, leadership, and public life. This year’s list also features prominent figures such as former President Donald Trump, Pope Leo XIV, Secretary of State Marco Rubio, Canadian Prime Minister Mark Carney, Chinese President Xi Jinping, Israeli Prime Minister Benjamin Netanyahu, and Artemis II commander Reid Wiseman.

Sundar Pichai was recognized for his pivotal role in expanding the reach of artificial intelligence through various products utilized globally. In his profile for TIME, Andrew Ng, founder of DeepLearning.AI and co-founder of Google Brain, highlighted Pichai’s leadership since becoming CEO in 2015, noting his ability to transform Google’s research breakthroughs into widely used tools. TIME emphasized that Pichai has maintained a startup-like agility at Google while advancing innovative AI products, including Google AI Studio, NotebookLM, Gemini CLI, and Antigravity.

Neal Mohan earned recognition for steering YouTube’s ongoing global growth. TIME described the platform as a hub for diverse content, from NFL games and podcasts to popular creators like MrBeast and CoComelon. Mohan’s blend of technical expertise, business acumen, and creator trust was underscored in his profile, where creator Michelle Khare remarked, “Approachability is one of Neal’s superpowers.”

Vikas Khanna was honored for his profound influence in food, culture, and humanitarian efforts. TIME noted that his work exemplifies how influence can manifest in various forms, including “a meal prepared by chef Vikas Khanna.” Chef Eric Ripert, co-owner of Le Bernardin and a James Beard Award winner, praised Khanna as “a man of extraordinary heart,” emphasizing his use of food as “a universal language to build bridges and foster understanding.” Khanna is also the founder of New York restaurant Bungalow, which TIME described as more than just a dining establishment, calling it “a living expression of storytelling,” where dishes reflect memory, heritage, and shared identity.

Zohran Mamdani received recognition for his political ascent in the United States. TIME noted that the New York City mayor has provided the Democratic Party with "a new source of momentum." Despite facing challenges related to housing policy, finances, and coalition politics, Mamdani has collaborated with New York Governor Kathy Hochul on childcare initiatives and successfully secured federal housing funds.

Ranbir Kapoor was acknowledged for his significant contributions to cinema and storytelling. In his profile for TIME, actor Ayushmann Khurrana remarked that while some actors pursue legacy, Kapoor has become one through his craft. Khurrana stated that Kapoor has enriched the emotional vocabulary of Indian cinema through restraint and authenticity, successfully bringing Indian narratives to international audiences.

TIME’s 2026 honorees are described as “changing culture in unprecedented ways,” reflecting the diverse forms of influence across various professions, generations, and countries. This year’s list showcases individuals who are not only leaders in their fields but also catalysts for change in society.

According to TIME, these influential figures are shaping the future in remarkable ways.

Why a Strong Password Isn’t Enough for Home Wi-Fi Security

A strong Wi-Fi password is insufficient for online privacy; utilizing a VPN is essential for encrypting connections and preventing ISP tracking.

While securing your home Wi-Fi with a strong password is a commendable first step, it is crucial to understand that a password alone does not guarantee your online privacy. Many individuals mistakenly believe that Wi-Fi security is solely about preventing unauthorized access to their network. Although this aspect is important, it represents only a fraction of the overall picture.

Even with a robust password in place, your internet activity can still be visible to various entities in ways you might not anticipate. A Wi-Fi password effectively locks the front door to your network, but it does not conceal what occurs within your connection.

When you connect to the internet at home, your internet service provider (ISP) can monitor a surprising amount of your online activities. This can include the websites you visit, the duration of your visits, and sometimes even more detailed information. Furthermore, it is not just your ISP that is observing your behavior; websites, applications, major tech companies, governments, and data brokers are continuously collecting information about your online activities, often without your knowledge.

To illustrate, think of your password as a barrier that keeps intruders out of your home. However, once your data leaves your residence, it can still be vulnerable during its journey across the internet. This is where a virtual private network (VPN) becomes essential.

A VPN establishes a secure, encrypted tunnel between your device and the internet. This means that your data is scrambled before it exits your home network, making it significantly more difficult for anyone to monitor your online activities. Additionally, connecting to a VPN server assigns you a new IP address, making it harder for your online actions to be traced back to you. This added layer of anonymity makes it more challenging for advertisers, social networks, and potential scammers to build behavioral profiles that could be used for targeted phishing attacks.
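The "scrambling" idea can be illustrated with a toy stream cipher: XOR the data with a keystream derived from a shared secret, so only a party holding the key can recover the plaintext. Real VPN protocols such as WireGuard and OpenVPN use vetted cryptography with proper key exchange and authentication; this sketch only shows why intercepted traffic is unreadable in transit:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def scramble(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the data,
    so the same function both encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-secret"
packet = b"GET /bank/balance HTTP/1.1"
ciphertext = scramble(key, packet)
assert ciphertext != packet                  # unreadable in transit
assert scramble(key, ciphertext) == packet   # recoverable at the endpoint
```

Without the key, an observer between your home and the VPN server sees only the ciphertext, which is the practical effect the article describes.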

Many VPN services are favored for their speed, user-friendliness, and comprehensive features. This is particularly important if you frequently use public Wi-Fi, where your data is even more exposed to potential threats.

In practical terms, most VPN services are straightforward to use. They provide applications for nearly every device, including options that can be configured directly on routers. These applications are typically easy to set up and operate, allowing users to connect with just a single click or tap. Once activated, a VPN masks your IP address and encrypts your connection, usually with only a modest impact on speed. In some cases, a VPN can even improve effective speeds by preventing an ISP from throttling certain types of traffic.

Setting up a VPN on your router ensures that every device in your home is automatically protected, including smart TVs, gaming consoles, and other connected gadgets. Moreover, many VPN providers now offer additional privacy tools that go beyond basic protection. These tools may include password managers, email protection, identity monitoring, and even private AI solutions designed to bolster your data security.

In summary, securing your home Wi-Fi is not merely about protecting your connection; it is about safeguarding your entire digital footprint. Your home network serves as the gateway to a multitude of online activities, including banking, shopping, work, and social interactions. Relying solely on a password is akin to locking your door while leaving your curtains wide open.

Integrating a VPN into your online routine provides an extra layer of privacy that operates seamlessly in the background, enhancing every aspect of your digital life. This approach not only prepares you for potential threats but also grants you peace of mind.

For those seeking the best VPN software, expert reviews are available at CyberGuy.com, detailing the top options for browsing privately on Windows, Mac, Android, and iOS devices.

Ultimately, while a strong password is a wise initial measure, it only protects access to your network, not the fate of your data once it leaves. Your internet activity traverses systems designed to track, analyze, and sometimes profit from it. By adding a VPN, you can regain control over your online privacy, encrypting your connection and limiting the visibility of your actions to others. This simple upgrade transforms basic security into genuine privacy without altering your everyday internet usage.

Where do you believe we should draw the line between connectivity and privacy? Share your thoughts with us at CyberGuy.com.


Soviet-Era Spacecraft Returns to Earth After 53 Years in Orbit

Soviet spacecraft Kosmos 482 reentered Earth’s atmosphere on Saturday after 53 years in orbit following a failed mission to Venus.

A Soviet-era spacecraft made its return to Earth on Saturday, marking the end of a 53-year journey that began with a failed attempt to reach Venus. The spacecraft, known as Kosmos 482, was confirmed to have reentered Earth’s atmosphere by the European Union’s Space Surveillance and Tracking (EU SST) consortium, which analyzed its trajectory and noted its absence from subsequent orbits.

The European Space Agency’s space debris office corroborated the reentry, indicating that the spacecraft failed to appear on radar at a German station. While the exact location of its descent remains unknown, experts had warned that some, if not all, of the half-ton spacecraft could survive the fiery reentry, as it was designed to endure the extreme conditions of a landing on Venus, the hottest planet in the solar system.

Scientists assessed the risks associated with the reentry, noting that the likelihood of anyone being struck by debris from the spacecraft was exceedingly low. Launched in 1972 by the Soviet Union, Kosmos 482 was part of a series of missions aimed at Venus. However, a rocket malfunction prevented this particular spacecraft from escaping Earth’s orbit, leaving it stranded for over five decades.

Much of Kosmos 482 had already reentered Earth’s atmosphere within a decade of its failed launch. The spherical lander, which measures approximately 3 feet (1 meter) in diameter and weighs over 1,000 pounds (495 kilograms), was the last remaining component of the spacecraft to descend. Experts noted that the lander was encased in titanium, contributing to its durability during reentry.

As the spacecraft spiraled downward, scientists and military experts were unable to predict the precise timing or location of its reentry. The uncertainty was compounded by solar activity and the deteriorating condition of the spacecraft after so many years in orbit.

As of Saturday morning, the U.S. Space Command had not yet confirmed the spacecraft’s demise, as it was still collecting and analyzing data from orbit. The U.S. Space Command routinely monitors dozens of reentries each month, but Kosmos 482 garnered additional attention from both government and private space trackers due to its likelihood of surviving reentry.

Unlike many other pieces of space debris, Kosmos 482 was coming in uncontrolled, with no intervention from flight controllers. Typically, these controllers aim to direct old satellites and debris toward vast expanses of water, such as the Pacific Ocean, to minimize the risk to populated areas.

The reentry of Kosmos 482 serves as a reminder of the long-lasting legacy of space exploration and the challenges that come with tracking and managing space debris. As technology advances, the monitoring of such objects will become increasingly critical to ensure the safety of both space missions and those on the ground.

According to Fox News, the reentry of Kosmos 482 highlights the ongoing need for vigilance in tracking space debris and understanding its potential impacts.

YouTube Adjusts Livestream Ads to Enhance Viewer Engagement

YouTube is revamping its livestream advertising strategy by pausing ads during peak engagement moments to enhance viewer experience and promote long-term monetization.

YouTube is making significant changes to its livestream advertising strategy, introducing a new approach that pauses ads during critical engagement moments. This initiative aims to enhance viewer experience while simultaneously strengthening long-term monetization efforts.

The decision addresses one of the major frustrations associated with live content: interruptions during vital or highly interactive segments. As livestreaming continues to gain prominence across various domains—from gaming to real-time news—YouTube is reassessing how its advertising model integrates with these shared digital experiences.

This shift is part of a broader evolution in digital advertising, as platforms increasingly recognize that poorly timed ads can disrupt not only viewing but also community interaction. Such interactions are essential to the modern livestream culture, where high-energy chats, spontaneous creator reactions, and collective audience participation are integral to the appeal of YouTube livestreams.

According to TechCrunch, the new system will automatically detect surges in live chat activity and pause ads for all viewers during these peak moments. In a blog post, YouTube stated its goal is to “protect that collective vibe,” reflecting a strategic shift that prioritizes communal viewing experiences and uninterrupted engagement as key drivers of long-term platform loyalty and creator success.

The update also introduces incentives linked to fan participation. When viewers purchase features such as Super Chat or Super Stickers—tools that highlight messages during streams—they will receive a temporary ad-free window immediately afterward. This approach reinforces a growing trend within the YouTube revenue model that combines advertising with direct fan support.

Historically, avoiding ads on YouTube has largely required a paid subscription, such as YouTube Premium. In contrast, this new strategy redistributes when ads appear rather than eliminating them altogether. Ads will still be present but will run during quieter moments when engagement is lower and viewers are less likely to disengage.

In addition to these changes, YouTube is expanding its monetization tools. The company has recently rolled out global access to virtual gifting across multiple countries and introduced features like simultaneous vertical and horizontal streaming formats. These updates aim to help creators reach audiences across various devices, including connected TVs, which accounted for over 30% of U.S. live watch time in 2025.

This announcement follows YouTube’s recent decision to raise subscription prices for its Premium service in the United States, highlighting the platform’s ongoing effort to balance ad revenue with alternative income streams.

Ultimately, YouTube’s latest changes signal a recalibration of its advertising strategy—one that treats viewer attention as a valuable, limited resource. By protecting peak moments instead of interrupting them, the platform is betting that a better experience today will translate into stronger engagement and revenue over time.

The post YouTube tweaks livestream ads to boost engagement appeared first on The American Bazaar.

AI Technology Increasingly Used in Cyberattacks, Microsoft Warns

Microsoft’s latest report reveals that cybercriminals are increasingly leveraging artificial intelligence to enhance their attack strategies, making cyberattacks faster and more accessible.

Microsoft Threat Intelligence has issued a stark warning regarding the evolving landscape of cybercrime, highlighting that cybercriminals are now utilizing artificial intelligence (AI) at nearly every stage of a cyberattack. This advancement enables attackers to operate more swiftly, scale their operations, and reduce the technical expertise required to execute their schemes.

While AI was initially heralded for its potential to streamline tasks such as email writing, software development, and data analysis, it has also caught the attention of malicious actors. The new report from Microsoft indicates that AI has become an invaluable tool for hackers, enhancing attackers’ existing capabilities rather than replacing the attackers themselves. In essence, AI serves as a powerful assistant, facilitating various aspects of cybercrime.

Cyberattacks typically involve multiple steps, including victim reconnaissance, crafting phishing messages, building infrastructure, and writing malicious code. Microsoft researchers note that generative AI tools are now expediting many of these processes. Tasks that once required hours or days can now be completed in mere minutes, allowing attackers to transition more quickly between different phases of an attack. Microsoft characterizes AI as a “force multiplier” that diminishes the barriers for attackers while they maintain control over their targets and strategies.

Some of the most sophisticated cybercriminal organizations are already experimenting with AI technologies. For instance, North Korean hacking groups, identified as Jasper Sleet and Coral Sleet, have integrated AI into their operations. One particularly concerning tactic involves creating fake remote worker profiles. Attackers use AI to generate realistic identities, resumes, and communications, applying for jobs at legitimate companies. Once hired, they gain unauthorized access to internal systems.

AI’s capabilities extend to generating culturally appropriate names and email formats that align with specific identities. This allows attackers to create convincing fake employee profiles, which can provide invaluable access once they infiltrate a company.

Researchers have also observed cybercriminals employing AI coding tools to assist in malware development. Generative AI can help attackers by dynamically generating scripts or altering malware behavior while it is running. Additionally, AI can be used to create phishing websites or facilitate attacks on infrastructure more efficiently. Microsoft has documented instances where AI was utilized to generate fake company websites that support social engineering efforts.

Despite the potential for misuse, AI companies have implemented safeguards to prevent their systems from being exploited. However, attackers are already devising methods to circumvent these protections, a tactic known as jailbreaking. This involves crafting prompts that trick AI systems into producing content they would typically refuse to generate. Researchers are also monitoring early experiments with agentic AI, which can autonomously perform tasks and adapt based on outcomes.

Currently, Microsoft emphasizes that AI primarily assists human operators rather than executing attacks independently. However, the rapid evolution of this technology raises concerns. One of the most significant issues highlighted in the report is the increasing accessibility of sophisticated cyberattack tools. In the past, launching complex cyberattacks required advanced technical skills. Now, AI tools can automate parts of this process, enabling individuals with limited programming knowledge to generate scripts, troubleshoot code, or translate scams into multiple languages.

This shift could potentially broaden the pool of individuals capable of launching cyberattacks. Conversely, AI also equips defenders with new tools for threat detection. Security teams are now leveraging AI to analyze behaviors, identify anomalies, and respond to attacks more swiftly. This development is fueling an ongoing cybersecurity arms race.

Microsoft’s security teams are actively working to detect and disrupt AI-enabled cybercrime as it emerges. The company employs threat intelligence systems to monitor attacker activities, identify new tactics, and share insights with organizations worldwide. Furthermore, Microsoft integrates AI into its security tools to enhance the detection of suspicious behaviors, phishing campaigns, and unusual account activities. These systems analyze patterns across billions of signals daily to identify threats before they can proliferate.

Organizations are advised to bolster their identity protections, monitor for unusual credential usage, and treat suspicious remote worker activities as potential insider threats. While the rise of AI-powered cyberattacks may seem daunting, many established security practices remain effective. Simple measures can significantly reduce risk.

As AI-generated phishing emails become increasingly sophisticated, it is crucial to verify any requests for passwords, payments, or sensitive information before clicking links or downloading files. Running robust antivirus protection across all devices is also essential; strong security software can detect malware, block suspicious downloads, and warn users about dangerous websites before they load.

Employing a password manager can help generate and securely store complex passwords for each account, preventing unauthorized access if one password is compromised. Additionally, multi-factor authentication provides an extra layer of security, thwarting many account takeovers even if a password is stolen. Regularly updating software to patch vulnerabilities is also critical; enabling automatic updates can help mitigate risks.
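The password advice above can be illustrated with a short sketch using Python's standard `secrets` module. This is a generic example of generating strong random passwords; the function name, length, and character set are our choices for illustration, not a recommendation from the article:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the OS's cryptographically secure RNG,
    # unlike the general-purpose `random` module
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password every run
```

A dedicated password manager does the same thing at scale, storing a unique password per account so that one compromised credential cannot unlock the others.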

Cybercriminals often gather personal information from data broker sites before launching scams. Utilizing a data removal service can help minimize the amount of personal information available online, reducing the likelihood of falling victim to attacks.

Be vigilant for unexpected login alerts, password reset messages, or unfamiliar devices connected to your accounts, as these may indicate a breach. Prompt action is necessary if anything appears suspicious.

As artificial intelligence continues to transform various industries, the realm of cybercrime is no exception. Hackers are now employing AI to craft phishing messages, develop malware, and execute attacks more rapidly than ever before. This technology lowers technical barriers and accelerates operations while human attackers maintain control. Security experts anticipate that the use of AI in cyberattacks will only increase as tools become more powerful and widely accessible. Consequently, awareness and strong digital habits are more critical than ever, as the next phishing email you receive may not have been penned by a human at all.

With AI enabling hackers to launch attacks more swiftly and on a larger scale, the pressing question remains: are tech companies moving quickly enough to protect users? For further insights, visit CyberGuy.com.

Researchers Identify Source of Black Hole’s 3,000-Light-Year Jet Stream

A recent study has linked the supermassive black hole M87 to its vast 3,000-light-year cosmic jet, enhancing our understanding of how black holes launch particles at nearly light speed.

A groundbreaking study has successfully connected the renowned M87 black hole, the first black hole ever imaged, to its powerful cosmic jet. This research reveals how the black hole launches particles at nearly the speed of light.

Using significantly enhanced coverage from the global Event Horizon Telescope (EHT), scientists traced a 3,000-light-year-long cosmic jet streaming from M87 to its likely source point. The findings, published in the journal Astronomy & Astrophysics this week, could provide crucial insights into the origins and mechanics of the vast cosmic jets produced by black holes.

M87 is a supermassive black hole located in the Messier 87 galaxy, approximately 55 million light-years from Earth. It is estimated to be 6.5 billion times more massive than the Sun. The first image of M87 was unveiled to the public in 2019, following data collection by the Event Horizon Telescope in 2017.

Dr. Padi Boyd of NASA emphasized the significance of M87’s activity in a video discussing the black hole’s discovery. “Not only is the black hole supermassive, it’s also active,” she noted. “Just a few percent are active at any given time. Are they turning on and then turning off? That’s an idea… We know there are very high magnetic fields that launch a jet. This image is observational evidence that what we’ve been seeing for a while is actually being launched by a jet connected to that supermassive black hole at the center of M87.”

M87 not only consumes surrounding gas and dust but also emits powerful jets of charged particles from its poles, forming the jet stream. This duality highlights the complex nature of black holes, as they both attract and expel matter.

Saurabh, the team leader at the Max Planck Institute for Radio Astronomy, described the study as an important step toward bridging theoretical concepts about jet launching with direct observations. “Identifying where the jet may originate and how it connects to the black hole’s shadow adds a key piece to the puzzle and points toward a better understanding of how the central engine operates,” he stated.

The Event Horizon Telescope is a global network of eight radio observatories that work together to detect radio waves from astronomical objects, such as galaxies and black holes. This collaboration allows the EHT to function as an Earth-sized telescope, significantly enhancing its observational capabilities. The term “Event Horizon” refers to the boundary of a black hole beyond which light cannot escape, as defined by the National Science Foundation.

The findings were derived from data collected by the Event Horizon Telescope in 2021. However, the authors of the study acknowledged that while the results are robust under the assumptions and tests performed, definitive confirmation and more precise constraints will require future EHT observations. These future studies will need higher sensitivity, improved intermediate-baseline coverage through additional stations, and an expanded frequency range.

This research not only sheds light on the mechanics of black holes but also opens the door for further exploration into the enigmatic behavior of these cosmic giants. Understanding how black holes launch jets could have profound implications for our knowledge of the universe and the fundamental forces at play.

According to Space.com, the study represents a significant advancement in astrophysics, linking theoretical models with observable phenomena.

Indian-American Tech Leader Venkat Kavarthapu Appointed CEO of Symplr

Venkat Kavarthapu has been appointed CEO of symplr, marking a strategic shift towards AI-driven solutions in healthcare operations.

Enterprise healthcare operations leader symplr has announced the appointment of Venkat Kavarthapu as its new chief executive officer, a move that underscores the company’s commitment to integrating artificial intelligence into the medical sector.

Kavarthapu, who brings over 25 years of experience in the healthcare technology industry, succeeds Chris Colpitts, who had served as interim CEO since November 2025. Colpitts will transition to the role of executive chairman of the board.

This leadership change comes at a crucial time for symplr, which provides essential administrative and operational software to nearly 90% of U.S. hospitals and over 400 health plans. The company aims to enhance its offerings through innovative AI solutions.

Having previously served as CEO of Edifecs, Kavarthapu has a strong background in scaling complex software systems. His tenure at Edifecs was marked by significant advancements in health data management platforms, culminating in the company’s acquisition by Cotiviti in 2025.

Kavarthapu’s journey in the American healthcare tech sector began in India. He earned a Bachelor of Engineering in Electronics and Communication Engineering from Osmania University in Hyderabad in 1993, followed by an MBA from the Indian Institute of Management Lucknow in 1996. These educational foundations provided him with the technical expertise and strategic insight necessary for his career, which began with a 12-year tenure at Wipro Technologies before he transitioned to the U.S. healthcare software industry.

Colpitts commended Kavarthapu’s ability to navigate the complexities of the modern healthcare landscape. “Venkat brings a strong combination of enterprise software knowledge and operational leadership,” Colpitts stated, emphasizing that Kavarthapu’s track record will be crucial in accelerating the company’s momentum.

In his new role, Kavarthapu plans to leverage artificial intelligence to address the “red tape” and administrative challenges that often burden healthcare providers and payers. His vision is to move beyond basic data management, ushering in a new era of “intelligent” software capable of predicting staffing needs and enhancing financial outcomes.

“I see a significant opportunity to harness AI to help healthcare organizations reduce operational complexity and improve the quality of care,” Kavarthapu remarked.

With backing from private equity firms Clearlake Capital Group and Charlesbank Capital Partners, symplr is positioning itself as a key player in the digital transformation of healthcare. Kavarthapu’s leadership is expected to enhance the integration of the company’s diverse product lines, which include workforce management and provider data, into a unified ecosystem.

As the healthcare industry increasingly embraces automation to combat burnout and rising costs, Kavarthapu’s appointment signals symplr’s intent to remain at the forefront of the digital health evolution.

According to The American Bazaar, this strategic shift reflects a broader trend in the healthcare sector towards leveraging technology for improved operational efficiency and patient care.

NASA’s Artemis Follow-Up Mission Approaches After Successful Lunar Flight

NASA is gearing up for its Artemis III mission, set to launch next year, which will focus on critical docking maneuvers in preparation for future lunar exploration.

NASA is setting its sights on the moon’s south pole as it prepares for the upcoming Artemis III mission, which aims to establish a future base on the lunar surface. This mission follows the successful Artemis II flight, which captivated audiences with stunning views and marked a significant milestone in lunar exploration.

Entry flight director Rick Henfling emphasized the agency’s forward momentum, stating, “The next mission’s right around the corner,” shortly after the Artemis II crew safely splashed down in the Pacific Ocean on Saturday. The excitement surrounding Artemis II has not waned, but NASA is already focused on the next chapter of its ambitious lunar program.

Scheduled for launch next year, Artemis III will see astronauts practicing critical docking maneuvers in Earth’s orbit. This mission is essential for testing the capabilities of the Orion capsule as it prepares to dock with a commercial lunar lander, a crucial step before any astronauts return to the moon.

Competition is heating up among private aerospace companies, with Elon Musk’s Starship and Jeff Bezos’ Blue Moon landers both vying to demonstrate their readiness for lunar missions. These companies are also in contention to support the Artemis IV mission, which is planned to be the first moon landing of the program in 2028.

NASA has already begun positioning key hardware for the upcoming docking test at Kennedy Space Center. Meanwhile, SpaceX is preparing for another Starship test flight, and Blue Origin is advancing toward its own lunar landing demonstration later this year.

The overarching goal of NASA and its partners extends beyond a single landing. The agency is targeting the moon’s south pole, an area believed to contain significant reserves of ice that could be utilized for water and fuel, essential for sustaining a future lunar base. This ambitious project is projected to cost between $20 billion and $30 billion.

As preparations for Artemis III continue, NASA is expected to announce the crew for the mission soon. The design of Artemis III is intended to mirror the testing protocols of the Apollo era, aiming to reduce risks before sending astronauts back to the lunar surface for the first time in over half a century.

According to The Associated Press, the Artemis program represents a significant leap forward in human space exploration, with the potential to pave the way for future missions to Mars and beyond.

How to Remove Personal Information from the Web Effectively

Removing personal information from data broker and people search sites can be challenging, but with the right strategies, you can regain control of your online privacy.

In an age where personal information is readily available online, many individuals struggle to remove their data from people search sites and data broker platforms. The process can be frustrating, especially when information reappears shortly after removal attempts. This recurring issue often discourages people from pursuing their privacy rights, but it is essential to understand that data brokers profit from your information and intentionally complicate the removal process.

Senator Maggie Hassan has recently highlighted the challenges posed by some data brokers, who obscure their opt-out pages, making it difficult for users to remove their personal information. However, with the right approach, you can take back control of your online privacy.

There are two primary methods to remove your personal information from the web: doing it yourself or utilizing a data removal service. While the latter option is often more efficient and thorough, this article will provide a step-by-step guide for those who prefer to handle the process independently.

Before diving into the removal process, it is crucial to compile a list of websites where your personal information is likely to be stored. This list may include various data broker sites, people search engines, and other platforms that aggregate personal data. Understanding where your information resides is the first step toward effective removal.

Data brokers typically fall into two categories: those that are easy to find and those that are less visible. The former often have public-facing sites designed for individuals to search for information, while the latter primarily sell data to businesses and may not appear in standard search results. Identifying these brokers can be challenging, but it is essential for a comprehensive removal strategy.

To locate your data, consider the following signals: where your data likely originated, such as companies you have shared information with, and any spikes in spam emails you may have experienced after signing up for services or entering giveaways. These indicators can help you identify potential data brokers that may be holding your information.

Once you have mapped out where your data is exposed, it is time to start the removal process. Begin with the most visible and high-risk sites, as these are the easiest for anyone to access. The typical process for removing information from these sites involves locating the opt-out page, submitting your request, and saving confirmation emails or screenshots as proof of your efforts.

Next, address less standardized sites that may have scraped your information from other sources. While these may require more effort to navigate, they often contain valuable contextual details about you, such as your job or interests. Look for privacy pages on these sites, as they may provide specific instructions for opting out.

The final category includes the least visible sites, which can be the most challenging to deal with manually. Many individuals encounter obstacles at this stage, making ongoing monitoring or automation beneficial. As you work through your list, keep track of your progress, as this will make it easier to manage future removal requests.
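The record-keeping step above can be as simple as a small script that appends each opt-out request to a CSV file. This is a minimal, hypothetical sketch (the field names and helper are invented for illustration), using only Python's standard library:

```python
import csv
import os
from datetime import date

# Columns for the removal-request log (our choice, adjust as needed)
FIELDS = ["site", "opt_out_url", "date_submitted", "status"]

def log_request(path: str, site: str, opt_out_url: str,
                status: str = "submitted") -> None:
    """Append one opt-out request to a CSV log, writing a header for a new file."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "site": site,
            "opt_out_url": opt_out_url,
            "date_submitted": date.today().isoformat(),
            "status": status,
        })
```

Each entry records where and when you filed a request, so when your data reappears on a site you have already handled, re-submitting takes seconds instead of starting the search from scratch.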

For those who find the manual process overwhelming, using a personal data removal service can be a worthwhile investment. These services handle the entire removal process on your behalf, eliminating the need for you to search for your data online or repeatedly return to data broker sites. They often perform a more thorough job than individuals can manage alone, requesting deletions from a wide range of websites, including those that may be difficult to find.

Many data removal services also offer features such as ongoing monitoring, alerts for new exposures, and the ability to submit additional removal requests as needed. Some even employ privacy specialists to handle these requests, ensuring a higher level of expertise in the process. Additionally, many services offer a money-back guarantee, allowing you to try them with little risk.

It is important to note that removing your personal information from the internet is not a one-time task. It requires persistence, strategy, and the right tools. While it can be frustrating to see your data reappear after removal, each step you take reduces your exposure and makes it more challenging for your information to circulate.

For those seeking the most control over their data, a manual approach provides a clear view of where your information resides. However, if you prefer consistency without the ongoing time commitment, a data removal service can alleviate that burden and continue working in the background.

Ultimately, the key to effective data removal is to stay proactive. Your personal information holds value, and recognizing this will change how you approach your online privacy. Have you ever faced the challenge of removing your personal information online only to see it resurface later? Share your experiences by reaching out to us at Cyberguy.com.

For more information on data removal services and to check if your personal information is exposed online, visit Cyberguy.com.

According to CyberGuy, taking control of your online privacy is an ongoing commitment that requires vigilance and the right resources.

Rockstar Games Confirms Limited Data Exposure in GTA 6 Breach

Rockstar Games has confirmed a limited data breach involving third-party vendor Anodot, with hacker group ShinyHunters demanding ransom but asserting that GTA 6 development remains unaffected.

A cybersecurity incident has emerged surrounding the highly anticipated Grand Theft Auto VI (GTA 6), as reports indicate that the hacker group ShinyHunters may have accessed systems related to Rockstar Games through a third-party vendor. This breach has garnered significant attention due to concerns over potential leaks or disruptions in the game’s development.

Initial assessments and Rockstar’s official statement suggest that the breach is limited to internal analytics data rather than critical game development files. This incident underscores the growing risks associated with third-party cloud services utilized by major gaming companies.

According to reports, ShinyHunters posted a ransom message on a dark web leak site, claiming to have accessed sensitive business information from Rockstar Games. The group allegedly demanded payment and threatened to release stolen internal data if their demands were not met by April 14, 2026. Despite these alarming claims, there is currently no confirmed evidence that the source code, gameplay footage, or story assets for GTA 6 were compromised.

Rockstar has acknowledged the occurrence of a third-party security incident but has downplayed its severity. The company confirmed that only a limited amount of non-material company information was accessed, emphasizing that no player data or game development assets were impacted.

Cybersecurity experts suggest that the attack did not directly target Rockstar Games’ servers. Instead, it appears that the hackers exploited a third-party Software as a Service (SaaS) provider known as Anodot, which offers analytics and cloud monitoring services. Anodot connects with Snowflake-based data warehouses that store enterprise-level analytics data. Through this ecosystem, attackers allegedly accessed linked systems without breaching Rockstar’s infrastructure directly.

This method of attack illustrates how modern cyber threats can bypass robust security measures by targeting weaker external vendors. Investigations indicate that the attackers stole authentication tokens through vendor integrations. These tokens functioned as secure digital keys, allowing trusted access between systems. Once acquired, these tokens may have enabled access to connected databases without requiring passwords or direct hacking attempts.
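The token mechanics described above can be sketched in a few lines. This is a generic, hypothetical illustration of bearer-token authorization, where possession of the token is the entire credential; the names and scheme are invented for the example and are not drawn from the actual Anodot or Snowflake integrations:

```python
import hmac
import secrets

# Token minted for a trusted vendor integration (hypothetical)
ISSUED_TOKEN = secrets.token_urlsafe(32)

def authorize(presented_token: str) -> bool:
    """Grant access to any caller presenting the valid token.

    No password and no user-identity check are involved: whoever holds
    the token is indistinguishable from the legitimate integration.
    """
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(presented_token, ISSUED_TOKEN)

assert authorize(ISSUED_TOKEN)       # the vendor integration is trusted
assert not authorize("wrong-token")  # any other value is rejected
```

This is why stolen integration tokens are so valuable to attackers: once exfiltrated from a weaker third-party system, they open the connected services without any further hacking.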

As companies increasingly rely on interconnected cloud platforms, this type of breach is becoming more common. However, it is reported that only analytics data was exposed, not sensitive development environments.

The timeline of events provides clarity on how the situation unfolded:

On April 11, 2026, ShinyHunters allegedly posted a ransom message on a dark web leak site claiming access to Rockstar-related data. By April 12, reports began circulating across cybersecurity outlets and gaming communities, prompting Rockstar to respond and confirm limited third-party data exposure. The hackers set a ransom deadline for April 14, 2026, while investigations continued into the vendor-side compromise involving Anodot and Snowflake systems.

Rockstar Games has responded promptly to the allegations, reiterating that only limited internal data was accessed. The company stated, “We can confirm that a limited amount of non-material company information was accessed in connection with a third-party data breach. This incident has no impact on our organization or our players.” They emphasized that the core systems for Grand Theft Auto VI remain secure and unaffected.

Currently, there is no evidence suggesting that the breach has impacted the GTA 6 release schedule. Industry sources indicate that development and marketing plans are proceeding as normal. Experts believe that Rockstar’s internal development environment is separated from analytics systems, which reduces the risk of direct exposure. While the breach raises concerns about third-party security, it does not appear to threaten game production or launch readiness.

The exposed data reportedly includes internal analytics such as performance metrics, operational dashboards, and business reporting data. This type of information helps companies track sales trends and internal performance but does not encompass gameplay content. Importantly, no source code, unfinished builds, or story-related materials have been confirmed as compromised, alleviating fears of spoilers or early leaks for fans eagerly awaiting GTA 6.

ShinyHunters has issued a ransom demand on the dark web, setting a strict deadline of April 14, 2026. They warned Rockstar Games to respond or face public data exposure and additional disruptive actions. The group stated, “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com. Pay or leak. This is a final warning to reach out by 14 Apr 2026 before we leak, along with several annoying (digital) problems that’ll come your way. Make the right decision, don’t be the next headline.”

Despite the threats, there is no confirmation that critical GTA 6 data is in their possession. Speculation regarding a potential delay in the game’s release has surfaced, but there is currently no official indication that Grand Theft Auto VI will be postponed due to this incident. Rockstar has reiterated that the breach involves non-material internal data and does not affect development systems.

Experts suggest that modern AAA studios like Rockstar typically isolate production pipelines from analytics platforms, minimizing risk. As of now, the GTA 6 launch timeline remains unchanged, and no delays are expected as a result of this cybersecurity incident, according to The Sunday Guardian.

Artemis II Astronauts Return After First Moon Mission in Over 50 Years

Four astronauts from the Artemis II mission successfully splashed down off the coast of San Diego, marking humanity’s first manned moon mission in over 50 years.

Four astronauts from the Artemis II mission completed a historic 10-day journey around the moon, splashing down off the coast of San Diego on Friday evening at 5:07 p.m. Pacific Time. This mission represents the first manned lunar expedition in more than half a century.

The crew launched from the Kennedy Space Center on April 1, embarking on a journey that took them approximately 252,000 miles from Earth, farther than any previous human spaceflight mission. NASA Administrator Jared Isaacman, who landed on the USS John P. Murtha ahead of the splashdown, expressed confidence in the recovery team’s ability to assist the astronauts.

“I have no doubt that you’re all going to execute this flawlessly as we get these astronauts who have just completed an absolute historic mission, traveling further into space than any humans have gone before,” Isaacman stated.

He emphasized the significance of the mission, noting, “For the first time, we’ve gone into the lunar environment in more than half a century. We are back in the business of sending astronauts to the moon again.” Isaacman also highlighted future plans, mentioning that once Artemis III launches in 2028 for the first moon landing in decades, NASA intends to establish a permanent presence on the moon.

After their successful mission, the four astronauts—Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch, and Mission Specialist Jeremy Hansen—were assisted out of the Orion crew module and taken aboard the USS John P. Murtha for medical evaluations.

The Orion spacecraft reentered Earth’s atmosphere at approximately 25,000 mph, utilizing an 11-parachute sequence to slow down to about 20 mph before landing in the ocean, roughly 60 miles off the coast. During reentry, temperatures outside the spacecraft soared to around 5,000 degrees Fahrenheit.

The last time astronauts traveled to the moon was in December 1972 during the Apollo 17 mission, three years after the historic Apollo 11 mission, which marked humanity’s first landing on the lunar surface in 1969.

This successful splashdown not only signifies a monumental achievement in space exploration but also paves the way for future lunar missions and the potential establishment of a moon base, according to Fox News.

Musk Criticizes WhatsApp Amid Privacy Class Action Scrutiny

Elon Musk criticized WhatsApp amid a class action lawsuit that raises significant privacy concerns regarding the Meta-owned messaging platform.

Elon Musk recently targeted WhatsApp on his social media platform, X, as the Meta-owned messaging service grapples with a class action lawsuit centered on privacy issues.

In a response to a post by user @cb_doge, Musk stated, “Can’t trust WhatsApp.” His comments come as the lawsuit, filed in early April in a California federal court, accuses Meta Platforms and WhatsApp of infringing on user privacy by allegedly permitting internal employees and third-party contractors to access private user messages.

While WhatsApp is marketed as an end-to-end encrypted messaging service, meaning that only senders and recipients can read messages, the complaint alleges that internal systems may allow limited access to message content under certain circumstances.

According to the lawsuit, this access is purportedly utilized for purposes such as fraud detection, content moderation, and compliance with legal requests. However, the plaintiffs argue that these systems may extend beyond what is necessary, potentially granting Meta staff and outsourced contractors, including firms like Accenture, access to message data that users believed to be completely private. Furthermore, the lawsuit claims that users were not adequately informed about the existence of such access mechanisms, which they argue could constitute misleading privacy representations.

The case also references whistleblower allegations, which have not been independently verified in court, suggesting that internal tools or workflows might allow employees to retrieve or review message content in specific situations. The lawsuit argues that WhatsApp’s public assertion that “not even WhatsApp” can read users’ chats may be misleading if any internal access pathways exist.

Meta and WhatsApp have firmly denied the allegations, asserting that the platform employs end-to-end encryption and that message content is not accessible during normal operations. They characterize the lawsuit as inaccurate and reject the claim that private messages are routinely read or intercepted.

As the case is still in its early stages, none of the allegations have been proven in court, and the legal process is ongoing.

For WhatsApp, the immediate concern is less about the potential legal outcomes and more about public perception. Messaging applications rely heavily on the belief that conversations are private, and once that belief is called into question, users and regulators tend to scrutinize every aspect of data handling more closely. Even in the absence of a final judgment, sustained scrutiny can compel companies to enhance transparency regarding internal processes, tighten access controls, and clarify how human review systems interact with automated security tools.

For Elon Musk and X, the situation presents an opportunity to reinforce a long-standing narrative that positions X as an alternative ecosystem for users who may lose confidence in competing platforms. Public criticism of rivals also serves a strategic branding purpose, bolstering Musk’s broader message about openness and skepticism toward traditional tech incumbents.

However, this scrutiny also places X under similar expectations, as users and regulators often extend the same privacy inquiries to any major communication platform. The significance of this moment lies in its illustration of the fragility of trust in large-scale messaging services.

According to The American Bazaar, the ongoing developments in this case will likely shape the future landscape of digital communication and user privacy.

Humanoid Robots Enter Mass Production Phase in China

Humanoid robots are now being mass-produced in China, with a factory capable of rolling out one robot every 30 minutes, signaling a significant shift in the robotics industry.

A factory in China has begun producing humanoid robots at an unprecedented pace, marking a significant transition towards large-scale manufacturing and broader adoption of this technology. With one robot rolling off the assembly line every 30 minutes, the facility is set to produce approximately 10,000 units annually, moving beyond the prototype phase into full-scale production.

This production line is the result of a collaboration between Leju Robotics and Dongfang Precision Science & Technology. What distinguishes this facility is its highly structured and repeatable manufacturing process, which includes 24 precision assembly stages and 77 inspection steps to ensure quality before a robot leaves the line. This rigorous testing is crucial, as reliability has historically been a challenge for humanoid robots.

Efficiency has also seen significant improvements, with the company reporting a more than 50 percent increase in output compared to previous production methods. Additionally, the system’s flexibility allows for a seamless switch between different robot models without halting operations, enabling the factory to cater to various industries, from automotive to home appliances. This adaptability is essential for transitioning from innovative technology to practical business applications.

The robotics industry appears to be at a pivotal moment. It is no longer sufficient for companies to merely showcase what their robots can do; they must now demonstrate the ability to manufacture them at scale. This shift is evident across the market, with investors closely monitoring production figures. High output levels indicate that a company can move beyond demonstrations and into real-world deployment, reflecting confidence in actual market demand.

Another noteworthy development is the division of roles within the industry. In this case, Leju Robotics focuses on design and software, while Dongfang Precision Science & Technology manages production and scaling. This model mirrors the evolution seen in other tech sectors, where one group develops the technology and another focuses on mass production. Such a separation could accelerate advancements across the robotics landscape.

Despite these advancements, a significant challenge remains: software development. While constructing the physical bodies of robots is becoming easier, programming them to function effectively in real-world environments continues to be a complex task. Homes, warehouses, and public spaces present unpredictable scenarios, with varying object shapes, lighting conditions, and tasks that can confuse machines. Although factories can now produce thousands of robots, this does not guarantee that they will be immediately useful. The onus is now on AI developers to bridge this gap.

The implications of these developments may seem distant from everyday life, but they are closer than one might think. As production increases, costs typically decrease, paving the way for more businesses to adopt humanoid robots. We may soon see them in warehouses, retail settings, or service roles, raising important questions about employment, safety, and public comfort with machines that resemble humans. The rapid pace of this transition is particularly striking; what once felt experimental is now on the verge of mainstream integration.

Humanoid robots are entering a new phase in their development. The conversation has shifted from whether these robots can be built to how quickly they can be produced and where they will be deployed. Factories like the one in China are setting the standard, and the rest of the industry must keep pace.

As humanoid robots become more commonplace in workplaces, society must consider where to draw the line between beneficial automation and excessive reliance on technology. This evolving landscape invites public discourse on the future of work and human-robot interaction.

For more insights on technology and security, visit CyberGuy.com.

Folio Selected as Official Technology Platform for AAHOA Marketplace

Folio has been designated as the official technology platform for the AAHOA Marketplace, enhancing the purchasing and billing experience for the association’s members.

The Asian American Hotel Owners Association (AAHOA), the largest hotel owners’ association globally with over 20,000 members—predominantly Indian American—has announced that Folio will serve as the official technology platform for the AAHOA Marketplace.

This collaboration was unveiled during AAHOACON26, held in Philadelphia from April 8 to 10. Folio, a prominent financial operations platform, is set to launch an updated version of the AAHOA Marketplace later this year. This initiative aims to improve the purchasing and bill payment experience for AAHOA members, who collectively own 60% of the hotels in the United States, according to a media release.

Initially announced at last year’s AAHOACON, the AAHOA Marketplace, powered by Avendra International and bolstered by AAHOA’s collective buying power, provides hotel owners with access to trusted, high-quality products and services at reduced costs.

Key features of the upcoming Marketplace include:

Enhanced purchasing capabilities, allowing members to easily restock or shop across suppliers from a single platform;

Mobile optimization, enabling members to buy, track, and manage orders directly from their smartphones;

Rewards programs, where members can opt to receive cash back on qualified purchases and streamline their billing through Folio Pay;

Improved accounting features, including automatic reconciliation and spend categorization, enhanced by Folio’s AI technology.

The AAHOA Marketplace will continue to be free for all members and will be pre-loaded with exclusive deals and discounts tailored for AAHOA members, as stated in the release.

“The custom-built version of Folio will not only accelerate the delivery of savings in the AAHOA Marketplace but also provide a vital segment of the industry with access to powerful operating and payments technology,” said Folio CEO Kate Adamson.

“AAHOA members deserve the best technology and procurement solutions. Folio brings us closer to achieving that goal,” remarked AAHOA Chairman Kamalesh (KP) Patel. “By combining our strengths, Folio will simplify the process for our members to save both time and money.”

“This is a significant win for our members,” stated AAHOA Vice Chairman Rahul Patel. “The technology offered by Folio has traditionally been available only to the largest hotel groups. Together, we are creating a tailored solution for AAHOA members.”

“It is evident how Folio will enhance the procurement experience,” noted AAHOA President and CEO Laura Lee Blake. “The planned updates to the platform will enable members to discover more supplier deals and maximize their savings.”

AAHOA’s 20,000 members account for 60% of the hotels in the United States and contribute 1.4% to the nation’s GDP, according to the release. More than 1 million employees work at AAHOA member-owned hotels, generating $51.3 billion in annual earnings, and these hotels support 4.2 million jobs across various sectors of the hospitality industry.

The announcement of Folio as the official technology platform marks a significant step forward for AAHOA members, promising enhanced efficiency and savings in their operations.

This report is based on information from The American Bazaar.

Space Travel Tickets Return as Prices Continue to Climb

Virgin Galactic has resumed ticket sales for suborbital space flights, but the price has risen to $750,000 per seat, reflecting the challenges and costs of commercial space travel.

Virgin Galactic has officially reopened ticket sales for its suborbital space flights, but prospective travelers will need to dig deeper into their pockets. The cost per seat has increased to $750,000, up from the previous price of $600,000. This price hike comes as the company prepares to accommodate over 675 customers who are eagerly waiting for their chance to experience space travel.

After nearly two years of pausing ticket sales, Virgin Galactic is making 50 new spots available for its upcoming flights. The company anticipates that flight testing will commence in the third quarter of 2026, with commercial service expected to begin in the fourth quarter of the same year. For those considering a booking, the waitlist is already substantial, indicating a strong interest in this unique experience.

However, it’s important to note that purchasing a ticket does not equate to a permanent move to space. The flights are short suborbital journeys lasting approximately 90 minutes. Virgin Galactic’s spaceplane is launched from a carrier aircraft at high altitude. Once released, the spaceplane ignites its rocket engine and ascends to the edge of space, allowing passengers to experience a few minutes of weightlessness before gliding back to Earth. This experience is more akin to a thrilling amusement park ride than a lengthy space mission, yet the allure of viewing Earth from above the atmosphere remains a significant draw for many.

While the prospect of traveling to space is undoubtedly exciting, the financial implications are considerable. The development and operation of reusable spacecraft are costly endeavors. Extensive testing is required, and safety regulations are stringent. When setbacks occur, they can significantly delay progress and increase costs.

Virgin Galactic has faced its share of challenges, including technical difficulties and tragic incidents. Notably, a test flight in 2014 resulted in the death of co-pilot Michael Alsbury, which has led the company to adopt a cautious approach to its operations. This history of setbacks contributes to the high ticket prices, as the limited number of flights and passengers necessitates premium pricing to sustain the business.

The company’s financial reports underscore the economic realities of the space tourism industry. In 2025, Virgin Galactic reported a net loss of $279 million and a negative free cash flow of $438 million, highlighting the substantial costs associated with building and scaling commercial spaceflight. CEO Michael Colglazier has indicated that ticket prices may continue to rise as the company increases production and testing efforts.
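The ticket economics above can be put side by side. The 50 newly released seats at the current price would gross about $37.5 million, only a small fraction of the reported 2025 net loss; a rough comparison using only the figures stated in this article:

```python
# Rough revenue-vs-loss comparison from the article's own figures.
seats_released = 50
price_per_seat = 750_000
gross_from_release = seats_released * price_per_seat

net_loss_2025 = 279_000_000
coverage = gross_from_release / net_loss_2025

print(f"Gross from seat release: ${gross_from_release/1e6:.1f}M "
      f"(~{coverage:.0%} of the 2025 net loss)")
```

At roughly 13 percent of one year's net loss, a single batch of ticket sales cannot close the gap, which helps explain both the price increase and the push toward higher flight frequency.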

This latest ticket release is part of a new development phase for Virgin Galactic. The company plans to begin ground testing of its next-generation SpaceShip in April 2026, with flight testing slated for the third quarter of that year. Commercial flights using this new vehicle are still on track to launch in the fourth quarter of 2026. Additionally, a second SpaceShip is already in development and is expected to enter service between late 2026 and early 2027, which could further enhance flight frequency.

“We completed pivotal milestones during the first quarter of 2026, and with assembly of our first SpaceShip nearly complete and ground testing set to begin in April, we have released a limited number of Virgin Galactic Spaceflight Expeditions, each priced at $750,000,” said CEO Michael Colglazier. The company aims to transition from monthly flights to a twice-weekly schedule per ship, which could eventually lead to more accessible pricing.

The timing of this ticket relaunch is strategic, as Blue Origin has paused its tourist flights for at least two years. Meanwhile, SpaceX is currently focused on satellite launches, cargo missions, and government contracts. This leaves Virgin Galactic as the only active option for private individuals seeking a ticket to space at this time. Although the market for space tourism remains small, Virgin Galactic currently holds a unique position.

The overarching question for the industry remains: despite two decades of space tourism efforts, why have so few individuals actually traveled to space? The dream of making space travel more accessible is still a work in progress. Companies are striving to scale operations, and Virgin Galactic plans to increase its flight frequency from approximately four per month to as many as ten. If successful, this could eventually lead to lower ticket prices. However, the current equation remains straightforward: limited supply combined with high operational costs results in expensive tickets.

Even for those who may not be inclined to spend $750,000 on a 90-minute journey, the reopening of ticket sales is significant. It signals that space travel is inching closer to becoming a tangible consumer experience, albeit still out of reach for most. Moreover, the technological advancements developed for these flights often have broader applications, influencing various industries over time. This situation serves as a reminder of the nascent stage of space tourism; while it exists, it is far from mainstream and primarily funded by wealthy early adopters.

Virgin Galactic’s decision to resume ticket sales is a clear indication that the space tourism industry is not fading away but rather evolving. However, the elevated price point reflects the ongoing challenges of making space travel a viable option for the masses. For now, the view from above remains one of the most exclusive experiences that money can buy. Would you consider paying for a trip to space if prices became more affordable, or do the risks outweigh the thrill for you?

Potential Discovery of New Dwarf Planet Challenges Planet Nine Theory

The potential discovery of a new dwarf planet, 2017OF201, may provide further evidence for the existence of the elusive theoretical Planet Nine in our solar system.

A team of scientists at the Institute for Advanced Study School of Natural Sciences in Princeton, New Jersey, has announced the potential discovery of a new dwarf planet, designated as 2017OF201. This large trans-Neptunian object (TNO) is located beyond the icy expanse of the Kuiper Belt and may challenge existing beliefs about the structure of our solar system.

TNOs are minor planets that orbit the Sun at distances greater than that of Neptune. While many such objects exist, 2017OF201 stands out due to its significant size and unusual orbit. The discovery was made by researchers Sihao Cheng, Jiaxuan Li, and Eritas Yang, who utilized advanced computational techniques to analyze the object’s unique trajectory.

“The object’s aphelion—the farthest point in its orbit from the Sun—is more than 1,600 times that of Earth’s orbit,” Cheng explained in a news release. “Meanwhile, its perihelion—the closest point to the Sun—is 44.5 times that of Earth’s orbit, which is similar to Pluto’s orbit.” The orbital period of 2017OF201 is estimated to be around 25,000 years, suggesting that it has undergone significant gravitational interactions with larger planets, leading to its current wide orbit.
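The quoted orbit can be cross-checked with Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in astronomical units raised to the 3/2 power. Treating "1,600 times Earth's orbit" as roughly 1,600 AU (an approximation, since the article says "more than" that) reproduces a period close to the stated ~25,000 years:

```python
# Kepler's third law check on 2017OF201's reported orbit.
# Aphelion and perihelion figures are the article's; reading them
# as distances in AU is an approximation made here.

aphelion_au = 1600.0      # "more than 1,600 times" Earth's distance
perihelion_au = 44.5      # closest approach, in AU

semi_major_axis_au = (aphelion_au + perihelion_au) / 2
period_years = semi_major_axis_au ** 1.5   # T^2 = a^3 (AU, years)

print(f"a ≈ {semi_major_axis_au:.0f} AU, T ≈ {period_years:,.0f} years")
```

This lower-bound estimate comes out near 23,600 years; since the true aphelion exceeds 1,600 AU, the article's figure of roughly 25,000 years is consistent.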

Cheng further speculated on the object’s migration history, suggesting that it may have initially been ejected into the Oort Cloud, the most distant region of our solar system, before being drawn back into its current orbit. This hypothesis indicates a more complex dynamic in the outer solar system than previously understood.

The implications of this discovery are substantial, particularly concerning the ongoing search for Planet Nine, a theoretical planet proposed to exist in the outer solar system. In January 2016, astronomers Konstantin Batygin and Mike Brown from the California Institute of Technology (Caltech) presented research suggesting the presence of a planet approximately 1.5 times the size of Earth, located far beyond Pluto. However, Planet Nine remains unobserved, with its existence inferred from gravitational patterns affecting smaller objects in the Kuiper Belt.

According to the theory, if Planet Nine exists, it could be similar in size to Neptune and possess a mass up to ten times that of Earth. It is theorized to orbit the Sun at a distance of up to 30 times that of Neptune, taking between 10,000 and 20,000 Earth years to complete one orbit.

The discovery of 2017OF201 suggests that the region beyond the Kuiper Belt, previously thought to be largely empty, may harbor more celestial bodies than anticipated. Cheng noted that only about 1% of 2017OF201’s orbit is currently visible from Earth, underscoring the vastness of our solar system and the potential for future discoveries.

“Even though advances in telescopes have enabled us to explore distant parts of the universe, there is still a great deal to discover about our own solar system,” Cheng remarked.

As researchers continue to investigate the outer reaches of our solar system, the existence of Planet Nine remains a tantalizing possibility, with the gravitational influences of objects like 2017OF201 potentially providing critical insights into its nature. The ongoing study of such trans-Neptunian objects may ultimately reshape our understanding of the solar system’s architecture.

This research adds a new dimension to the ongoing exploration of our cosmic neighborhood, highlighting the complexity and dynamism of the solar system’s outer regions. The findings were reported in a recent news release, emphasizing the importance of continued observation and study of these distant celestial bodies.

According to NASA, the search for Planet Nine and the study of TNOs like 2017OF201 could help clarify the gravitational patterns observed in the outer solar system, potentially leading to a deeper understanding of our cosmic environment.

Kia Unveils 2027 Telluride Featuring First Hybrid and X-Pro Trims

The 2027 Kia Telluride debuts with a new turbocharged hybrid powertrain and an enhanced off-road X-Pro variant, reinforcing Kia’s commitment to innovation in the competitive three-row SUV market.

LOS ANGELES, CA – Kia has officially unveiled the second-generation 2027 Telluride, introducing a host of new features, including its first-ever turbocharged hybrid powertrain and a more capable X-Pro off-road variant.

Since its initial launch, the Telluride has established itself as a dominant force in the three-row SUV segment, often leading to long waitlists and numerous accolades. Despite its success, Kia opted for an evolutionary approach rather than a radical redesign, focusing on enhancements that align with its vision for a diversified and cleaner automotive future.

This decision comes at a critical time for the U.S. auto industry, as many traditional manufacturers are scaling back their electric vehicle (EV) and hybrid initiatives. With a shift in federal policy favoring fossil fuels, Kia remains committed to its electrification strategy, positioning itself as a leader in the market as it evolves.

The 2027 Telluride is designed and engineered specifically for the North American market, featuring a more rugged, “mountain-inspired” exterior and a luxurious interior that balances practicality with comfort.

The Telluride Turbo Hybrid combines a 2.5-liter turbocharged engine with a 1.65-kWh lithium-ion battery and electric motor, generating a robust 329 horsepower and 339 lb.-ft. of torque. For those prioritizing fuel efficiency, the Hybrid EX FWD trim boasts an EPA-estimated 35 MPG combined, offering a remarkable total driving range of up to 637 miles. This improvement addresses previous critiques regarding the fuel economy of its predecessor.
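Dividing the stated range by the EPA combined figure implies a fuel tank of roughly 18 gallons; the tank capacity below is an inference from those two numbers, not a published Kia specification.

```python
# Implied tank size from the article's figures for the Hybrid EX FWD.
# The tank capacity is derived here, not an official specification.

epa_mpg_combined = 35      # EPA-estimated combined MPG
total_range_miles = 637    # stated maximum driving range

implied_tank_gal = total_range_miles / epa_mpg_combined
print(f"Implied tank: ~{implied_tank_gal:.1f} gallons")
```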

For traditionalists, the gasoline-only 2.5-liter turbo engine has also been upgraded, now delivering 274 horsepower and 311 lb.-ft. of torque, a nearly 50 lb.-ft. increase over the outgoing V6. Both the Hybrid and internal combustion engine (ICE) versions maintain impressive towing capacities, rated at 4,500 lbs and 5,000 lbs, respectively.

The interior of the 2027 Telluride features a “digital-first” transformation, highlighted by a large curved display with dual 12.3-inch panoramic screens. This setup runs Kia’s latest Connected Car Navigation Cockpit, which supports over-the-air updates, as well as wireless Apple CarPlay and Android Auto.

Kia has prioritized passenger comfort with new front relaxation seats that include power leg rests, while the driver benefits from an Ergo Motion seat equipped with a massage function. The second row now offers available captain’s chairs with power operation and climate control, and even the third row receives an upgrade with optional heating, ensuring all passengers enjoy a premium experience.

The Telluride’s physical dimensions have also expanded, featuring a longer wheelbase and increased overall length. This results in class-leading second-row legroom and enhanced cargo space, totaling 22.3 cubic feet behind the third row even with all eight seats occupied.

In response to the rising trend of “overlanding,” Kia has significantly enhanced the X-Pro trim. Unlike its predecessor, which primarily focused on aesthetics, the 2027 X-Pro is designed for serious off-road capability. It boasts an elevated ground clearance of 9.1 inches, wider all-terrain tires, and a new Electronic Limited Slip Differential.

To assist drivers in navigating challenging terrains, Kia has introduced a Ground View Monitor, providing a composite view of the area directly beneath the vehicle at low speeds. This feature is complemented by an off-road status screen that tracks pitch, roll, and steering angle, making the Telluride as adept on trails as it is on highways.

Safety remains a top priority for the 2027 Telluride, which aims for the IIHS Top Safety Pick+ rating. It includes 10 standard airbags, featuring a new front-row center airbag designed to prevent collisions between passengers during side impacts.

The suite of Advanced Driver Assistance Systems (ADAS) has also been expanded. Notable features include Highway Driving Assist 2, which assists with lane changes and maintains safe distances, and Digital Key 2.0, allowing owners to use their smartphones or Apple Watches as keys. Additionally, the Rear Occupant Alert uses radar sensors to detect movement in the rear seats, ensuring no child or pet is left behind.

To cater to modern families, Kia has integrated Entertainment and Data Services, enabling passengers to stream Netflix, YouTube, and Disney+ directly to the vehicle’s screens while parked. Sports enthusiasts can even customize their digital dashboards with themes from all 30 NBA teams.

The 2027 Telluride is already making its way into American showrooms, with the gasoline-powered LX trim starting at $39,190. The top-tier X-Pro SX-Prestige is priced at $56,790, while the Turbo Hybrid models start at $46,490 for the EX trim and reach up to $57,590.

Assembled in West Point, Georgia, the 2027 Telluride represents Kia’s commitment to maintaining its status as a leader in the family SUV market, blending innovation with practicality and luxury.

According to India West, Kia’s strategy reflects a broader commitment to sustainability and market leadership in the evolving automotive landscape.

The AI Revolution Is Expanding Beyond Tech, Says Venture Capitalist Ajay Mago

The AI revolution is transforming traditional industries, according to Ajay Mago, a venture capitalist who emphasizes the importance of generative AI in reshaping business operations and investment strategies.

Ajay Mago, a Chicago-based investor and lawyer, is co-founder of Twelvefold Ventures, a firm focused on harnessing generative AI to reshape industries beyond the tech sector. Mago believes that artificial intelligence is redefining how non-tech businesses compete, enabling sectors long viewed as traditional to achieve tech-style growth by integrating AI into their daily operations.

With a unique blend of legal expertise and venture capital experience, Mago advises founders on capital strategy, governance, risk management, and long-term scalability. His legal background, which includes partnerships at major firms like Mayer Brown, Jones Day, and Duane Morris, informs his approach to venture investing, especially as issues surrounding AI, data privacy, and liability become increasingly critical for startups and regulators alike.

In addition to his work at Twelvefold, Mago is an investor and advisor to Censius, a company specializing in AI observability and model monitoring. He is actively involved in various business and civic organizations, including The Economic Club of Chicago and the U.S. India Chamber of Commerce of Dallas Ft. Worth. His professional endeavors span across major cities like Chicago, Dallas, and Austin, highlighting Texas’s growing significance as a technology and innovation hub.

Mago, a proud alumnus of The University of Texas, holds a law degree and both bachelor’s and master’s degrees from the McCombs School of Business. Through Twelvefold, he collaborates closely with founders to build and validate new companies from their inception. The firm provides initial capital while its studio offers operational support and technical expertise, enabling entrepreneurs to swiftly transition from concept to execution, particularly in applying foundational AI models across various business verticals.

In an exclusive interview with The American Bazaar, Mago discussed the evolving technology landscape, the future of AI regulation, and the changing dynamics of venture investing beyond traditional coastal hubs.

Mago noted that Texas, particularly Dallas, is emerging as a vibrant tech and venture capital hub, with comparisons being drawn to Silicon Valley. He emphasized the diversified economy of Texas, where cities like Austin and Houston contribute to a strong foundation for innovation. “There are strong legal industries across these cities, and the tools for capital efficiency are present,” he explained. “Founders are reinvesting into the local startup community, which has gained momentum over the past decade.”

He highlighted that Texas is home to many Fortune 100 companies, which fosters executive talent and robust educational systems. This combination creates a fertile environment for high-quality founders, many of whom have succeeded in non-tech fields. Mago pointed out that the Silicon Valley playbook is now being applied in Texas, where traditional businesses are integrating technology to enhance their operations.

When discussing the industries currently prioritized for investment, Mago mentioned sectors such as manufacturing, healthcare, insurance, agriculture, advertising, legal services, financial services, and energy. He noted that generative AI is significantly impacting these industries, allowing businesses that previously did not view themselves as technology-driven to unlock technology-style growth.

As traditional businesses adopt AI, Mago emphasized the importance of structuring data responsibly amid increasing regulatory scrutiny and privacy concerns. He stated that accountability and transparency are crucial, particularly as technology becomes more integrated into everyday life. “The first company we started, Censius.ai, has always focused on observability and monitoring,” he said, underscoring the need for businesses to audit their technology effectively.

Mago also shared insights into some of the AI companies he has invested in, including Censius.ai, which focuses on machine learning and AI observability. He mentioned Location Matters, a company that combines geolocation information systems with AI, and Attri.ai, which enables business users to access AI directly, streamlining the development process and reducing costs.

Addressing concerns about the potential overhype surrounding AI investments, Mago acknowledged the skepticism but emphasized the tangible impact of AI technologies. He compared the current AI landscape to the transformative effects of services like Uber and Amazon, suggesting that the accessibility of AI tools will lead to significant economic impacts across various industries.

On the regulatory front, Mago expressed the need for a comprehensive framework that addresses the evolving nature of technology businesses. He highlighted the importance of rethinking liability for tech companies, especially as they become more integrated into everyday business practices. “There needs to be a revisiting of how we think about liability for technology companies,” he stated, advocating for a balanced approach that combines federal regulations with state-level experimentation.

As for the impact of AI on India, Mago acknowledged the potential disruptions, particularly in lower-level coding jobs. He noted that while AI simplifies certain tasks, it also introduces new complexities that require skilled oversight. He emphasized that India’s strength lies in its ability to innovate on a budget, which could position the country favorably in the evolving AI landscape.

Mago’s commitment to his work is evident in his frequent travels between Chicago and Texas, where he balances his roles in venture capital and law. He anticipates that the U.S. will continue to develop AI regulations that promote innovation while addressing concerns around bias and data privacy.

In conclusion, Ajay Mago’s insights reflect a deep understanding of the intersection between AI, business, and regulation. As the landscape continues to evolve, his work at Twelvefold Ventures positions him at the forefront of the AI revolution, which is increasingly taking shape outside of traditional tech hubs.

According to The American Bazaar, Mago’s perspective underscores the importance of adapting to the changing dynamics of venture investing and the critical role of generative AI in shaping the future of various industries.

Meta Introduces ‘Muse’ AI Model in Superintelligence Initiative

Meta has launched its new AI model, Muse, as part of its initiative to develop superintelligent systems, showcasing advanced capabilities and a strategic investment approach.

In a significant advancement in artificial intelligence, Meta has unveiled its latest AI model, dubbed “Muse.” This introduction marks a pivotal step toward the development of more sophisticated, general-purpose AI systems. The announcement coincides with the company’s intensified efforts within its newly established research team focused on superintelligence.

Meta describes Muse as a model designed to enhance understanding and generate complex outputs across various domains. This development indicates a strategic shift toward more adaptable AI systems. According to the company, Muse represents “a step forward in building systems that can reason, create, and assist in more open-ended ways.” Researchers have emphasized that Muse is part of a larger initiative to transcend the limitations of narrow AI applications.

In an official blog post, Meta highlighted that Muse aims to “unlock more general intelligence capabilities,” noting that the system is engineered to manage a broader array of tasks with enhanced coherence and contextual understanding. The company also mentioned that such models could eventually facilitate more immersive digital experiences, including content creation and interactive environments.

This launch is in line with Meta’s long-term strategy to compete with leading players in the AI sector by making substantial investments in foundational models and infrastructure. The company has increasingly concentrated on developing in-house capabilities while forging strategic partnerships to bolster its position in the rapidly evolving AI landscape.

Evidence of this strategy was seen in June 2025, when Meta finalized a major investment in Scale AI, valuing the startup at approximately $29 billion. Scale AI is known for providing labeled data and infrastructure that are crucial for training machine learning models. This investment underscores Meta’s recognition that high-quality data pipelines are essential for developing more powerful AI systems like Muse.

By investing in Scale AI, Meta aimed to secure access to advanced data-labeling tools and expertise, which are vital for enhancing model accuracy and performance. Analysts interpreted the deal as part of a broader strategy to vertically integrate AI development, encompassing everything from data processing to model deployment.

With the introduction of Muse, Meta is signaling its intent to remain at the forefront of AI innovation. The company’s blend of internal research and strategic investments reflects a long-term commitment to creating systems that could eventually rival human-level reasoning in specific domains. As competition heats up across the AI sector, Meta’s latest initiative underscores both the scale of its ambitions and the resources it is prepared to allocate to realize them.

This information is based on insights shared by The American Bazaar.

Healthcare Data Breach Affects System Containing Patient Records

CareCloud has confirmed a significant data breach involving its electronic health record system, with hackers gaining access for approximately eight hours on March 16, raising concerns about potential data exposure.

CareCloud, a provider of healthcare technology solutions, has reported a serious security incident involving unauthorized access to one of its electronic health record systems. The breach occurred on March 16 and lasted for about eight hours, prompting an investigation into the extent of any potential data exposure.

While CareCloud has confirmed the breach, it has not yet determined whether any patient records were accessed or compromised. The company is currently working with external cybersecurity experts to assess the situation and understand the implications of the breach.

The incident highlights ongoing vulnerabilities within the healthcare sector, which has seen a rise in data breaches in recent years. CareCloud operates multiple environments for storing patient records, and according to a filing with the U.S. Securities and Exchange Commission, the attackers gained access to one specific environment. Fortunately, CareCloud stated that the breach was contained to this single environment and did not affect its other systems or platforms.

Despite this containment, the key question remains whether any data was exfiltrated from the system. The potential for stolen health data to be used for identity theft, insurance fraud, and other scams underscores the seriousness of such breaches. Healthcare organizations hold vast amounts of sensitive personal information, including names, Social Security numbers, and medical histories, making them attractive targets for cybercriminals.

The CareCloud breach serves as a reminder of the interconnected nature of healthcare infrastructure. The company supports over 45,000 providers and millions of patients, meaning that any security incident can have widespread implications. The scale of the breach is further compounded by the fact that many healthcare providers utilize cloud services, such as Amazon Web Services, to manage their data. While these platforms offer scalability and flexibility, they also necessitate stringent security measures to prevent unauthorized access.

As the investigation continues, CareCloud has not disclosed detailed technical information about its systems or how data is separated and backed up across its environments. Understanding these aspects is crucial, as they could influence how far attackers were able to navigate within the system once they gained access.

Even if you are unfamiliar with CareCloud, it is possible that your healthcare provider utilizes its services. This reality illustrates how breaches at behind-the-scenes companies can ultimately impact patients. Although there is currently no confirmation that patient data was stolen, it is essential for individuals to remain vigilant. Notifications regarding potential data exposure may take weeks or even months to be issued.

In light of this breach, individuals are encouraged to adopt proactive measures to protect their personal information. Regularly reviewing explanation of benefits statements and billing records for any unfamiliar charges or services is a good practice. Even minor discrepancies can indicate potential fraud, and it is advisable to contact your insurer or healthcare provider immediately if something appears amiss.

Healthcare data can be exploited to open fraudulent accounts, file false claims, or commit identity theft. Identity theft protection services can monitor personal information, such as Social Security numbers and email addresses, alerting users if their data is found on the dark web or used to create unauthorized accounts. Additionally, these services can assist in freezing bank and credit card accounts to prevent further misuse.

To further safeguard against potential threats, individuals should be cautious of emails related to medical updates or billing issues, as these can often contain malicious links or attachments. Utilizing strong antivirus software can help detect threats before they cause harm. It is also advisable to secure patient portals with unique passwords and enable two-factor authentication (2FA) when available, adding an extra layer of security.

After a breach, it is common for scammers to impersonate healthcare providers, reaching out via email, text, or phone calls. Individuals should verify the source of any communication before clicking links or sharing personal information. When in doubt, it is best to contact the provider directly using official contact information.

The CareCloud data breach is still unfolding, and the uncertainty surrounding it reflects the complexities of healthcare systems. These systems often rely on multiple vendors, cloud services, and interconnected tools, creating numerous entry points for cybercriminals. Even with prompt responses to breaches, the repercussions can linger long after the initial incident.

As the landscape of healthcare technology continues to evolve, the responsibility for safeguarding sensitive health data remains a pressing concern. The CareCloud incident serves as a stark reminder of the vulnerabilities inherent in the healthcare sector and the importance of robust security measures.

This is a developing story. According to Fox News, the investigation is ongoing, and further details will be released as they become available.

Artemis Astronauts Experience Communication Blackout on Moon’s Far Side

The Artemis II crew experienced a historic 40-minute communication blackout as they passed behind the Moon, marking a significant milestone in deep space exploration.

The Artemis II crew officially entered a communications blackout on Monday evening as their spacecraft moved behind the Moon’s far side, setting new distance records in the process. The blackout began at approximately 6:44 p.m. ET, making the astronauts—Reid Wiseman, Victor Glover, Christina Koch, and Canadian astronaut Jeremy Hansen—the most isolated humans in deep space history.

The blackout occurred as the spacecraft lost its line of sight to Earth, with the Moon blocking radio communications entirely. Contact is anticipated to resume around 7:25 p.m. ET, coinciding with a moment known as “Earthrise,” when Earth reappears over the Moon’s horizon.
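As a quick sanity check, the loss-of-signal and reacquisition times stated here imply a gap of about 41 minutes, consistent with the roughly 40-minute blackout. The calendar date in the sketch below is a placeholder, since the article gives only clock times.

```python
from datetime import datetime

# Clock times from the article; the date itself is a placeholder.
loss_of_signal = datetime(2026, 1, 1, 18, 44)   # 6:44 p.m. ET
reacquisition = datetime(2026, 1, 1, 19, 25)    # 7:25 p.m. ET

blackout_minutes = (reacquisition - loss_of_signal).seconds // 60
print(blackout_minutes)  # 41, i.e. "about 40 minutes"
```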

NASA has assured that there are no specific dangers anticipated during this mission, although ground control is prepared for potential contingencies. The astronauts have practiced essential tasks, such as consuming protein shakes and administering medication, while wearing their bulky orange launch and entry suits. This preparation is crucial in case they need to remain in their gear for an extended period.

In addition to the communication blackout, the Artemis II crew will achieve several significant milestones. At approximately 7:05 p.m. ET, the spacecraft is expected to reach its farthest point from Earth, at a distance of 252,760 miles. This surpasses the Apollo 13 record by roughly 4,105 miles, marking a notable achievement in space exploration.
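The two figures in this paragraph pin down the Apollo 13 record being referenced. A one-line check, using only the numbers stated above:

```python
# Distances stated in the article, in miles.
artemis_ii_max_distance = 252_760   # Artemis II's farthest point from Earth
margin_over_apollo_13 = 4_105       # stated margin over the Apollo 13 record

# The Apollo 13 record implied by those two figures.
implied_apollo_13_record = artemis_ii_max_distance - margin_over_apollo_13
print(implied_apollo_13_record)  # 248655 miles
```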

At their closest approach, the Moon will appear about the size of a basketball held at arm’s length, according to NASA. Although ground control and the science evaluation room will not be able to communicate with the astronauts during this blackout period, the crew will continue to execute their lunar targeting plan and conduct scientific observations.

The astronauts are set to track historic Apollo sites, scout potential future landing zones, and capture rare views of nearby planets, including Mercury, Venus, Mars, and Saturn. They will also have the unique opportunity to observe a solar eclipse from the Orion spacecraft’s vantage point.

Earlier in the day, the crew broke a distance record previously held by Apollo 13, which was set in 1970. This achievement underscores the significance of the Artemis II mission as a pivotal step in humanity’s exploration of deep space.

According to NASA, the Artemis II mission is not only a remarkable technical achievement but also a historic moment in the ongoing journey of human exploration beyond Earth.

Android Security Flaw Allows Hackers to Unlock Phones in Under a Minute

Researchers have identified a critical vulnerability in certain MediaTek processors that could allow hackers to bypass Android lock screens and access sensitive data in under a minute.

Your phone’s lock screen serves as a vital barrier against unauthorized access, protecting your personal information from prying eyes. However, a newly discovered vulnerability affecting specific Android devices powered by MediaTek processors poses a serious risk, enabling attackers to bypass these security measures in less than a minute.

Once exploited, this flaw allows hackers to recover your phone’s PIN, unlock encrypted storage, and extract sensitive information, including cryptocurrency wallet seed phrases. Security experts estimate that approximately one in four Android devices may be at risk, particularly among budget-friendly models.

The vulnerability, tracked as CVE-2026-20435 in the National Vulnerability Database, impacts Android phones that utilize a security component known as Trustonic’s Trusted Execution Environment (TEE). This technology is designed to safeguard sensitive data, such as encryption keys, from unauthorized access. However, analyses reveal that the protections offered by TEE can be bypassed on affected devices.

By connecting a compromised phone to a computer via USB, an attacker with physical access can exploit the vulnerability during the early boot process. This could expose sensitive data before the device’s full security measures are activated. In essence, it is akin to accessing a master key before a safe door has even closed.

Once attackers gain access to these low-level components, they can potentially access encrypted storage without needing the user’s PIN. In the worst-case scenario, this could lead to the extraction of highly sensitive information, including personal photos, stored passwords, private messages, financial data, and cryptocurrency wallet credentials. If seed phrases for crypto wallets are compromised, attackers could drain funds permanently.

Addressing this issue is complicated, as it originates in the processor itself, which is manufactured by MediaTek. The company has announced a firmware patch to mitigate the vulnerability, but individual phone manufacturers must distribute the update through their own software update channels. Depending on the device and its support status, the rollout of these updates may vary significantly.

Fortunately, this type of attack necessitates physical access to the device and a USB connection to a computer, meaning it cannot be executed remotely. However, if your phone is stolen, briefly confiscated, or even taken for repairs, an attacker could potentially exploit this vulnerability to extract sensitive information.

If you are uncertain whether your device is affected by this vulnerability, you can verify your phone model on platforms like GSMArena or your manufacturer’s website to identify the system-on-chip (SoC) it uses. Cross-reference this information with MediaTek’s March security bulletin under CVE-2026-20435 by visiting corp.mediatek.com/product-security-bulletin/March-2026 to check for affected chipsets.

To determine if your phone is at risk, follow these steps: Go to Settings, select About phone, and find your exact model name. Then, search for your phone model on GSMArena or your manufacturer’s website to identify the processor. Devices equipped with Qualcomm Snapdragon or Google Tensor chips are not susceptible to this specific issue.
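The manual check above amounts to a simple lookup of your device's system-on-chip against the bulletin's affected list. The sketch below illustrates that lookup; the chipset names in it are hypothetical placeholders, not the actual affected parts, so consult MediaTek's bulletin for the real list under CVE-2026-20435.

```python
# Hypothetical placeholder chipset names; the real affected list is in
# MediaTek's March security bulletin under CVE-2026-20435.
AFFECTED_CHIPSETS = {"mt6893", "mt6877"}

def is_potentially_affected(soc: str) -> bool:
    """Return True if the device's system-on-chip appears in the affected set."""
    return soc.strip().lower() in AFFECTED_CHIPSETS

print(is_potentially_affected("MT6893"))              # True (placeholder chip)
print(is_potentially_affected("Snapdragon 8 Gen 2"))  # False: not a listed MediaTek part
```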

Additionally, check for and install any available system updates from your manufacturer: navigate to Settings, select Software update, and apply anything pending. While MediaTek has released a fix, it is up to each device manufacturer to distribute it promptly.

For those using affected devices, a few simple precautions can help mitigate the risk of unauthorized access to your data. A security app cannot resolve this processor-level flaw or block this specific exploit, but it can detect malicious applications, spyware, and suspicious activity that attackers might install after gaining access.

If you store sensitive information such as cryptocurrency wallet seed phrases, recovery codes, or important documents in notes apps or screenshots, consider relocating them to a secure offline location. If someone exploits this vulnerability, that information could be exposed.

Since this exploit requires physical access to your phone, it is essential to avoid leaving your device unattended in public places and exercise caution when handing it over to repair shops or unfamiliar technicians. Physical access significantly increases the risk of data extraction.

While the vulnerability undermines encryption on affected devices, maintaining strong lock settings can still protect against many other threats. Opt for a longer PIN or passcode instead of simple patterns, and enable automatic locking after short periods of inactivity.

Even if attackers gain access to your device’s data, enabling two-factor authentication (2FA) can prevent them from logging into your online accounts. Implement 2FA for email, banking apps, cloud storage, and social media accounts whenever possible.

A password manager can securely store your login credentials in an encrypted vault, preventing them from being scattered across various apps and notes. If your device is compromised, the password manager still protects your accounts with strong encryption, requiring attackers to breach another layer of security before accessing your logins.
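The extra layer a password manager adds can be sketched with a key-derivation function: the vault's encryption key is derived from the master password, so stolen vault bytes alone do not reveal logins. This is a minimal illustration using Python's standard library PBKDF2, not any particular product's actual scheme.

```python
import hashlib

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte vault encryption key from the master password.

    PBKDF2-HMAC-SHA256 with a high iteration count makes guessing the
    master password slow, even for an attacker holding the vault file.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

salt = b"\x00" * 16  # demo salt; real managers generate a random salt per vault
key = derive_vault_key("correct horse battery staple", salt)

# A wrong master password yields a completely different key, so the
# encrypted vault contents stay unreadable without it.
print(derive_vault_key("wrong guess", salt) == key)  # False
```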

Some Android devices limit USB data access when locked. Activating this setting can reduce the risk of unauthorized data extraction through a wired connection, especially in situations where someone briefly gains physical access to your phone. For Samsung phones running the latest software, navigate to Settings, tap Lock screen, then select Secure lock settings. Enter your current PIN, enable “Lock network and security,” or a similarly named option to block USB data access while your device is locked.

This vulnerability highlights a broader issue within the Android ecosystem. Even when chipmakers release fixes, millions of devices rely on manufacturers to deliver updates, which may not occur, particularly for lower-cost models that quickly lose support. While users often assume that their lock screen and encryption will safeguard their data if a phone is lost or stolen, incidents like this reveal that such protection is only as robust as the update policies that support it.

Should phone manufacturers be required to guarantee security updates for several years if their devices contain critical encryption vulnerabilities? Let us know your thoughts by reaching out to us at CyberGuy.com.

According to CyberGuy.

Industrial Exoskeletons Enhance Worker Efficiency While Reducing Strain

Industrial exoskeletons are transforming the workplace by reducing physical strain on workers, enabling them to perform demanding tasks more efficiently and with less fatigue.

Industrial exoskeletons are innovative wearable systems designed to assist workers by sharing the physical load during demanding tasks, such as overhead lifting and repetitive bending. These devices help alleviate muscle strain and fatigue, allowing employees to maintain productivity throughout their shifts.

For those who have spent long hours lifting, drilling overhead, or bending over conveyor belts, the onset of fatigue can be rapid and debilitating. This is where industrial exoskeletons come into play. By strapping onto the body, these systems help distribute the weight, allowing workers to rely less on their muscles and more on the supportive technology. As a result, workers experience reduced strain and can work longer without succumbing to fatigue. This technology is already being implemented on job sites across the United States.

Industrial exoskeletons fall into three primary categories, each tailored to different types of work environments and tasks. Passive systems, for instance, do not rely on motors or batteries. Instead, they utilize springs or mechanical structures to redistribute weight effectively. A notable example of this is the Hilti EXO-O1, a shoulder harness that transfers the weight of the arms to the hips using spring-loaded supports. Testing has shown that it can reduce shoulder muscle load by up to 47% during overhead tasks, making tools feel significantly lighter by the end of the day.

Another passive system is the Laevo Flex, which provides spring-based assistance to support the lower back during bending and lifting. This system is designed for dynamic movement, allowing workers to walk and lift without needing to activate or deactivate the device. The Laevo Flex is also adjustable and built for extended wear in various environments, including outdoor settings. Like other passive systems, it effectively reduces strain on the lower back during repetitive tasks without the need for motors or batteries.

While passive systems are relatively lightweight, typically weighing between 4.4 and 8.8 pounds, they do not adapt automatically to different tasks in real time. In contrast, powered exoskeletons utilize motors, sensors, and onboard processors to actively assist movement. The German Bionic Exia is an example of a battery-powered back exoskeleton designed for warehouse and logistics work. This system actively supports the lower back during lifting, helping to reduce strain and fatigue over time. Powered exoskeletons can track motion using sensors and provide almost instantaneous support, making the assistance feel seamless and natural.

These powered systems can significantly lessen the effort required for repetitive lifting tasks, particularly in high-volume environments. However, they come with trade-offs. Some powered exoskeletons can weigh over 40 pounds, depending on their design, and they are often much more expensive, costing tens of thousands of dollars. As a result, many companies introduce them through pilot programs before broader implementation.

Soft exosuits represent another advancement in this technology. Using fabric, straps, and tension systems instead of rigid frames, these lightweight systems, such as the HeroWear Apex 2, weigh about three pounds and assist with lifting movements. Testing in warehouse environments has demonstrated that soft exosuits can enhance productivity while reducing reported lower back discomfort among workers engaged in repetitive tasks. These systems allow for more natural movement than their rigid counterparts, although they provide less force and are better suited for repetitive tasks rather than heavy lifting.

The benefits of exoskeletons are particularly evident in everyday tasks that place significant strain on the body. For example, holding tools overhead can lead to considerable shoulder and neck strain. Systems like the Hilti EXO-O1 can reduce muscle load by up to 47%, making tools feel much lighter. Back support systems, such as the Laevo Flex, can decrease muscle effort by up to 30% during lifting, while soft systems like the HeroWear Apex 2 help mitigate fatigue during constant bending.

Despite their advantages, exoskeletons are not without limitations. Proper fit is crucial; if a device does not align correctly with a worker’s body, it can lead to discomfort or restricted movement. Additionally, even lightweight systems add extra load, and powered systems can be particularly cumbersome. Cost remains a significant barrier for many companies, with passive systems typically costing a few thousand dollars and powered systems often exceeding tens of thousands. Experts recommend using exoskeletons in conjunction with proper ergonomics and regular movement to avoid potential long-term issues, such as reduced muscle engagement.

For workers involved in physical labor, this technology has the potential to transform daily experiences. Employees may find themselves feeling less sore at the end of their shifts, reducing the risk of injury over time and enabling longer work periods without the same level of fatigue. For employers, the advantages are clear: fewer injuries, reduced absenteeism, and enhanced productivity. As adoption of this technology continues to grow, many workplaces are currently testing these systems before implementing them more broadly.

While it may be tempting to think of ordering an exoskeleton like any other piece of equipment, most industrial exoskeletons are sold directly to companies rather than individuals. Manufacturers typically engage with employers through pilot programs or bulk orders, making them less accessible through standard retail channels. Some lighter systems, particularly passive or soft exosuits, may be easier to obtain, but many brands still prefer to sell through business channels or approved partners.

For those interested in exploring this technology, starting with the manufacturer’s website is advisable. Look for options such as “request a demo” or “contact sales,” which are often the first steps toward any potential purchase. As adoption increases, access to these systems may become more widespread.

Industrial exoskeletons are rapidly transitioning from experimental trials to real-world applications. They are not intended to replace human workers but rather to assist them in working smarter and safer. As technology continues to advance, we can expect lighter designs, improved comfort, and more intelligent assistance, potentially redefining the landscape of physically demanding work in the years to come. According to CyberGuy, the future of work may be significantly altered by these innovations.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a novel electronic tattoo, or “e-tattoo,” designed to monitor mental workload in high-stress professions by measuring brain activity and cognitive performance.

In a groundbreaking study published in the journal Device, scientists have introduced an innovative electronic tattoo device, or “e-tattoo,” that can be applied to the forehead to help individuals in high-pressure work environments track their brainwaves and cognitive performance.

Dr. Nanshu Lu, the senior author of the research from the University of Texas at Austin, emphasized the significance of mental workload in human-in-the-loop systems, noting its direct influence on cognitive performance and decision-making. The e-tattoo is particularly aimed at professionals in demanding roles such as pilots, air traffic controllers, doctors, and emergency dispatchers.

According to Dr. Lu, the technology could also benefit emergency room doctors and operators of robots and drones, enhancing their training and performance. One of the primary objectives of the study was to develop a method for measuring cognitive fatigue in high-stakes careers.

The e-tattoo is designed to be temporarily affixed to the forehead and is notably smaller than existing devices. It operates by utilizing electroencephalogram (EEG) and electrooculogram (EOG) technology to measure brain waves and eye movements, offering a compact and cost-effective alternative to traditional EEG and EOG machines, which tend to be bulky and expensive.

Dr. Lu explained that the e-tattoo is “as thin and conformable to the skin as a temporary tattoo sticker,” making it a practical solution for real-time monitoring of mental workload. She highlighted that understanding human mental workload is essential in the fields of human-machine interaction and ergonomics due to its impact on cognitive performance.

The study involved six participants who were tasked with identifying letters displayed on a screen. The letters flashed one at a time in various locations, and participants were instructed to click a mouse if either the letter or its location matched a previously shown letter. Each participant completed the task multiple times, with varying levels of difficulty.
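The matching rule in this task can be made concrete in a few lines. The article does not specify exactly how far back the comparison reaches, so the sketch below assumes a 1-back rule (respond when the current letter or its location matches the immediately preceding trial), purely for illustration.

```python
# Illustrative 1-back matching rule: respond when the current letter OR
# its on-screen location matches the previous trial. The actual rule used
# in the study is not specified in the article.
def should_respond(trials):
    """For each (letter, location) trial, return True when a response is expected."""
    responses = []
    prev = None
    for letter, location in trials:
        matched = prev is not None and (letter == prev[0] or location == prev[1])
        responses.append(matched)
        prev = (letter, location)
    return responses

seq = [("A", 1), ("B", 1), ("C", 3), ("C", 2)]
print(should_respond(seq))  # [False, True, False, True]
```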

The researchers observed that as the difficulty of the tasks increased, the brainwave activity detected by the e-tattoo shifted, indicating a corresponding rise in mental workload. The device comprises a battery pack, reusable chips, and a disposable sensor, making it both practical and efficient.

Currently, the e-tattoo exists as a lab prototype, with a production cost of approximately $200. Dr. Lu noted that further development is necessary before it can be commercialized, including real-time mental workload decoding and validation in more realistic environments.

This innovative technology holds promise for enhancing performance and well-being in high-stress jobs, providing a new tool for monitoring cognitive load and potentially improving decision-making processes in critical situations.

For more information, refer to the study published in Device.

Hims & Hers Reports Breach of Customer Support System

Hims & Hers, a telehealth company, reported a data breach involving its customer support system, with hackers accessing personal information between February 4 and February 7, 2026.

Hims & Hers, a telehealth company specializing in weight loss medications and sexual health prescriptions, has confirmed a data breach affecting its third-party customer service platform. The company disclosed the incident in a notice filed with the California attorney general’s office on Thursday.

According to Hims & Hers, hackers infiltrated its third-party ticketing system between February 4 and February 7, stealing a significant number of support tickets that contained personal information submitted by customers. The breach notice indicated that the stolen data included customer names, contact information, and other unspecified personal details, which the company chose to redact in its communication.

While Hims & Hers assured customers that their medical records were not compromised, the nature of the customer support system means that the data could still contain sensitive information regarding individuals’ accounts and healthcare. The company has not disclosed the number of individuals affected by the breach. Under California law, companies must report data breaches that impact 500 or more residents of the state.

“Customer medical records were not impacted by this incident, and neither were communications with healthcare providers on the platform,” the company stated. Hims & Hers is currently reviewing its policies and procedures to prevent similar intrusions in the future and has notified federal law enforcement. The company will also inform regulators if required.

Jake Martin, a spokesperson for Hims & Hers, explained to TechCrunch that the breach was the result of a social engineering attack, where hackers deceived employees into granting access to their systems. He noted that the stolen data “primarily included customer names and email addresses.” However, the company did not specify the exact types of data taken when questioned by TechCrunch.

Additionally, Hims & Hers did not indicate whether it received any communication from the hackers, such as ransom demands. As of now, no hacking group has claimed responsibility for the attack, and the stolen data has not appeared publicly. Information generated by healthcare organizations is often highly sought after by criminals due to its potential for misuse in phishing and identity theft schemes.

In recent years, customer support and ticketing systems have become increasingly attractive targets for hackers. Financially motivated cybercriminals have been known to raid databases containing customer information and extort companies for ransom. For instance, last year, Discord experienced a data breach affecting its customer support ticketing system, which exposed government-issued IDs of approximately 70,000 individuals who had submitted their driver’s licenses and passports for age verification.

This incident underscores the growing risks associated with data security in the telehealth sector and highlights the importance of robust cybersecurity measures to protect sensitive customer information.

For more details, refer to TechCrunch.

Responsible AI Is Essential for Building Trust in a Fragmented World

Artur Turemka discusses the critical role of responsible AI in fostering trust and navigating regulatory challenges in the global fintech landscape during a recent podcast episode.

As artificial intelligence continues to transform payments, commerce, and global expansion, a pressing question emerges: how can businesses build a truly global platform amidst a landscape of local regulations? This topic was explored in depth on the “CAIO Connect” podcast, hosted by Sanjay Puri, featuring Artur Turemka, Chief Global Growth Officer at Autopay. The episode, recorded during the World Economic Forum in Davos, provides valuable insights into the intersection of AI, fintech, and regulatory frameworks in today’s digital economy.

Turemka operates at the forefront of fintech innovation and international growth. In his role at Autopay, he is tasked with expanding the company’s reach beyond Poland and Europe while maintaining the trust that is essential to financial services. Autopay specializes in facilitating seamless payments for merchants, ensuring that transactions are executed quickly, securely, and without interruption.

A pivotal moment in the podcast is Turemka’s introduction of the “Zero Delay Economy” concept. This initiative goes beyond merely expediting payments; it aims to provide merchants with greater freedom, independence, and time. Turemka emphasizes that when payment processes function smoothly, businesses can concentrate on what truly matters: fostering growth and enhancing customer relationships.

When Puri inquires about the role of AI at Autopay, Turemka makes it clear that AI is integrated throughout the organization. From fraud detection and transaction acceleration to enhancing internal productivity, AI plays a crucial role in driving efficiency at every level. In the realm of payments, AI bolsters trust by identifying anomalies and preventing fraudulent activities in real time. Additionally, it empowers employees by streamlining daily tasks and facilitating quicker decision-making.
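The episode does not describe Autopay's actual models, but anomaly-based fraud flagging of the kind mentioned here can be illustrated with a toy statistical check: score each transaction amount against the batch mean and flag large deviations.

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Flag amounts deviating from the batch mean by more than `threshold`
    population standard deviations. A toy stand-in for real fraud models."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return [False] * len(amounts)  # identical amounts: nothing stands out
    return [abs(a - mean) / stdev > threshold for a in amounts]

history = [20, 25, 22, 19, 24, 21, 23, 5000]  # one wildly atypical payment
print(flag_anomalies(history))
# [False, False, False, False, False, False, False, True]
```

Production systems use far richer features (merchant, device, velocity, geography) and learned models, but the principle is the same: surface transactions that deviate sharply from an established baseline, in real time.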

The key takeaway from Turemka’s insights is straightforward yet impactful: AI should be utilized to enhance outcomes for both customers and teams, rather than being deployed merely for the sake of novelty.

Operating within the financial services sector entails navigating a landscape of stringent regulatory oversight. Turemka underscores the importance of compliance and data protection, stating that these priorities are paramount. Whether adhering to Polish regulations, European laws such as GDPR, or other jurisdiction-specific guidelines, Autopay is committed to ensuring that customer data is handled responsibly and ethically.

Given that AI systems often depend on extensive amounts of sensitive data, Turemka highlights a crucial leadership lesson: responsible AI is not optional in fintech; it is essential for establishing long-term trust.

One of the more candid moments in the podcast revolves around the challenges of regulation. While the aspiration is to create global platforms, Turemka acknowledges that unified global regulations are currently unrealistic. Instead, Autopay adopts a market-by-market approach, investing in compliance and drawing lessons from best practices across different regions.

Turemka notes that this strategy is not without its difficulties, but it is necessary for achieving global growth. Flexibility, patience, and a readiness to operate within diverse regulatory frameworks while upholding a consistent value proposition are critical components of success.

As a co-host of the Leaders Forum Poland, Turemka also shares insights into Poland’s emerging role on the global innovation stage. He advocates for viewing AI not through a national lens, but as part of a global ecosystem driven by talent, ambition, and collaboration. Poland’s increasing entrepreneurial success and economic momentum reflect this broader perspective.

In conclusion, Turemka leaves listeners with a powerful message: progress is rooted in dialogue and partnership. In times of complexity, breaking down barriers, collaborating across sectors, and remaining open to conversation are vital for driving meaningful innovation.

As the episode draws to a close, one theme resonates strongly: scaling AI and fintech on a global scale is not merely a technical challenge; it is fundamentally a human one. Ultimately, trust—more than technology—remains the most valuable currency in this evolving landscape.

According to The American Bazaar.

Artemis II Performs Key Lunar Burn for Historic Deep-Space Mission

The Artemis II mission has successfully transitioned to a lunar trajectory, marking a significant milestone in human space exploration with its four-member crew set for a historic journey.

The four-member crew of NASA’s Artemis II mission has successfully transitioned from Earth’s orbit to a lunar trajectory following a flawless translunar injection (TLI) burn. This maneuver, executed late Thursday, officially commits the Orion spacecraft to a high-stakes, eight-day journey that will carry humans to the vicinity of the moon for the first time since 1972. As the first crewed flight of the Space Launch System (SLS) rocket and the Orion capsule, Artemis II serves as a pivotal stress test for deep-space life-support systems and navigation. By the end of this mission, the crew is expected to set a new record for the farthest distance humans have ever traveled from Earth, surpassing the benchmark set by the Apollo 13 mission over five decades ago.

CAPE CANAVERAL, Fla. — NASA’s Artemis II mission entered its most ambitious phase on Thursday evening as the Orion spacecraft’s main engine fired for nearly six minutes, accelerating the vehicle to escape velocity and setting a course for the moon. The maneuver, known as the translunar injection (TLI) burn, took place approximately 25 hours after the mission’s historic liftoff from Kennedy Space Center’s Launch Complex 39B.

With the successful completion of the burn, the crew—Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch, and Canadian Space Agency (CSA) Mission Specialist Jeremy Hansen—is now on a “free-return” trajectory. This orbital path ensures that the moon’s gravity will naturally pull the spacecraft around its far side and sling it back toward Earth for a Pacific Ocean splashdown, currently scheduled for April 10, 2026.

The Artemis II mission is designed to push the boundaries of human reach. While the Apollo missions of the late 1960s and early 1970s focused on lunar landings, Artemis II is a “shakedown” flight intended to validate the Orion spacecraft’s performance with a human crew. On the sixth day of the mission, the crew is projected to reach a point roughly 4,600 miles beyond the far side of the moon.

At its maximum distance, Orion will be well over 230,000 miles from Earth, eclipsing the standing record of 248,655 miles (400,171 kilometers) set by the crew of Apollo 13 in 1970, who were forced into a high-altitude lunar loop following an onboard explosion. Unlike the emergency nature of the 1970 record, the Artemis II trajectory is a deliberate test of the Space Launch System’s (SLS) precision and the Orion’s ability to sustain life in the harsh radiation environment of deep space.

“Humanity has once again shown what we are capable of, and it’s your hopes for the future that carry us now on this journey around the moon,” Jeremy Hansen said in his first address to Mission Control following the TLI burn. Hansen’s inclusion marks the first time a non-American has traveled beyond low-Earth orbit, a nod to the international coalition-building that defines the Artemis program.

The TLI burn utilized an Orbital Maneuvering System (OMS) engine with a storied pedigree. The engine used for this mission was salvaged and refurbished from the Space Shuttle program, having previously flown on 19 different shuttle missions. This hardware evolution underscores NASA’s strategy of blending legacy technology with modern computing power.

The Orion capsule itself offers a stark contrast to the Apollo-era Command Modules. While the Apollo capsules provided 210 cubic feet of habitable volume for three men, Orion provides 331 cubic feet—a roughly 58% increase—to accommodate its four-member crew. This extra space is critical for the mission’s various objectives, which include testing a $23 million waste management system and exercise equipment designed to prevent bone density loss during longer voyages to Mars.

“With this burn to the moon, we do not leave Earth. We choose it,” Mission Specialist Christina Koch noted before the burn, emphasizing the mission’s role in gathering data to protect the home planet and its future explorers. Koch, who already holds the record for the longest single spaceflight by a woman, is now poised to become the first woman to reach the lunar vicinity.

The Artemis program represents a significant shift in U.S. space policy, moving away from the “flags and footprints” approach of the mid-20th century toward a sustainable lunar economy. This mission is the second of several planned phases, following the uncrewed Artemis I in 2022. It sets the stage for Artemis III and IV, which aim to land the first woman and person of color on the lunar surface later this decade.

However, the program faces intense scrutiny regarding its fiscal and temporal milestones. Originally slated for an earlier launch, Artemis II was delayed due to technical refinements and budget reallocations. The SLS rocket, standing 322 feet tall, carries a per-launch price tag estimated at $2.2 billion, part of a broader program that has seen costs climb into the tens of billions.

The geopolitical stakes are equally high. The United States is currently in a de facto space race with China, which has announced plans to land taikonauts on the moon by 2030. The Artemis Accords, a set of non-binding principles for space cooperation, now boast over 40 signatories, positioning Artemis II as a diplomatic tool as much as a scientific one.

As the crew settles into the “coast” phase of the mission, their daily schedule is packed with system checks. They have already addressed minor issues typical of a test flight, including a brief glitch in the communication system and a small leak in the waste management suction line, both of which were resolved by Mission Control in Houston.

Over the next 48 hours, the crew will focus on optical navigation, radiation monitoring, and CO2 scrubbing to ensure the life-support system effectively filters the air for four active adults over a prolonged period.

As Orion moves further away, the Earth will appear as a shrinking marble in the spacecraft’s windows. For Commander Reid Wiseman and his crew, the next eight days are not just a journey through the vacuum of space, but a bridge between the legacy of the 20th century and the aspirations of the 21st, according to NASA.

Banking Technology Data Breach Affects 672,000 Customers in Ransomware Attack

A ransomware attack on Marquis, a fintech company, has exposed sensitive personal and financial data of over 672,000 individuals, raising concerns about data security in the banking sector.

A recent ransomware attack on Marquis, a Texas-based fintech company, has compromised the personal and financial data of 672,075 individuals. This breach has raised alarms about the security of sensitive information held by third-party companies that support banking institutions.

Marquis, which provides data analytics tools to numerous banks, reported that hackers gained access to its systems in August 2025. The stolen data includes critical information such as names, dates of birth, home addresses, bank account details, debit and credit card numbers, and Social Security numbers. Such a combination of data can facilitate serious identity theft and fraud.

What makes this incident particularly concerning is that Marquis is not a household name, meaning many individuals may not have been aware that their data was stored with the company. The breach highlights the vulnerabilities that can exist within the banking ecosystem, especially when third-party vendors are involved.

In the wake of the attack, Marquis has filed a lawsuit against its firewall provider, SonicWall, alleging that a security flaw may have allowed the attackers to access critical configuration files. According to the lawsuit, these files provided hackers with a detailed map of Marquis’ network, which they exploited to steal data and deploy ransomware.

The lawsuit accuses SonicWall of failing to secure its cloud backup system, which allegedly exposed firewall configuration files, encrypted credentials, and detailed network architecture related to customer environments. Marquis claims that this level of access effectively gave the attackers a blueprint of its defenses. Furthermore, the complaint alleges that SonicWall was aware of the compromise to its cloud backup service but did not promptly disclose the full extent of the breach, initially reassuring customers that firewall protections were intact. This delay hindered Marquis’ ability to take timely protective measures.

In a statement, a spokesperson for Marquis detailed the company’s response to the incident. “In August 2025, Marquis Marketing Services identified a data security incident and immediately enacted our incident response protocols, including proactively taking affected systems offline to protect our data and our customers’ information,” the spokesperson said. “We engaged leading third-party cybersecurity experts to conduct a comprehensive investigation and notified law enforcement.” The spokesperson also noted that SonicWall later clarified that firewall configuration data and credentials associated with all customers using the cloud backup service had been accessed.

Experts warn that the exposure of firewall configuration files can significantly increase the risk of further attacks. These files serve as blueprints that can reveal vulnerabilities within a company’s defenses, allowing attackers to bypass security measures that would typically prevent unauthorized access.

Once inside the network, hackers can copy sensitive data and encrypt systems to demand a ransom. Even if the company manages to restore operations, the stolen data remains a significant threat, as criminals can use it to open credit cards, take out loans, or access bank accounts. Additionally, they can combine this data with other leaks to create convincing scams that may target victims through phone calls, emails, or messages that appear to be from legitimate sources.

Individuals concerned about their data being exposed in this breach are encouraged to take proactive measures to protect themselves against identity theft and fraud. One recommended step is to check if their email addresses have been compromised by visiting the website Have I Been Pwned. This resource lets users see whether their information has appeared in known data breaches.
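Have I Been Pwned also runs a free Pwned Passwords range API built on k-anonymity: only the first five characters of a password’s SHA-1 hash ever leave the machine, and the full hash is matched locally. The following is a minimal Python sketch of the client-side half; the HTTPS call itself (a GET to `https://api.pwnedpasswords.com/range/<prefix>`) is omitted.

```python
import hashlib

def pwned_range_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query:
    only the 5-character prefix is sent to the API; the 35-character
    suffix is compared locally against the returned candidates."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_seen(suffix: str, api_response: str) -> int:
    """Parse a range-API response ("SUFFIX:COUNT" per line) and return
    how many times this suffix was seen in breaches (0 if absent)."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Because the server only ever sees the five-character prefix, it learns nothing useful about which password was checked.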

It is also advisable to secure important accounts, such as email and banking, by using strong, unique passwords that include a mix of letters, numbers, and symbols. Avoiding predictable choices, such as names or birthdays, and never reusing passwords can further enhance security. Utilizing a password manager can simplify the process of managing complex passwords and help identify any breaches.
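The password guidance above can be made concrete with Python’s standard `secrets` module, which is designed for cryptographic randomness. This is an illustrative sketch only, and the symbol set is an arbitrary choice:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, numbers, and symbols,
    using the cryptographically secure `secrets` module."""
    if length < 4:
        raise ValueError("too short to include every character class")
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Resample until every recommended character class is present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw
```

In practice a password manager does this generation (and storage) automatically, which is why the article recommends one.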

Regularly monitoring financial transactions is crucial. Checking accounts frequently can help detect unauthorized charges early, as criminals often test accounts with small transactions before attempting larger withdrawals. If there is a possibility that a Social Security number has been exposed, placing a fraud alert or freezing credit can provide additional protection against identity theft.

Enabling two-factor authentication (2FA) for banking and email accounts adds an extra layer of security, making it more difficult for unauthorized individuals to access accounts even if they have the password. Keeping devices and applications updated with the latest security patches and installing trusted antivirus software can also help mitigate risks associated with malware and phishing scams.

This breach underscores a growing concern regarding the security of personal data held by third-party companies. As financial data is often shared across a network of vendors, the consequences of a security failure can extend beyond the initial company involved. The ongoing legal battle between Marquis and SonicWall raises important questions about accountability in the cybersecurity landscape, particularly when breaches expose sensitive information of hundreds of thousands of individuals.

As the situation develops, it remains critical for consumers to stay informed and take necessary precautions to protect their personal information. For more information on identity theft protection and data security, resources are available at CyberGuy.com, which offers insights and tools to help individuals safeguard their digital identities.

For further details on this incident, refer to Fox News.

CloudFront Service Disruption Affects Users Globally

The disruption of Amazon’s CloudFront service on October 11, 2023, highlighted vulnerabilities in digital infrastructure, affecting user access to numerous online platforms worldwide.

On October 11, 2023, a significant service disruption impacted users attempting to access various online platforms reliant on Amazon’s CloudFront, a widely utilized content delivery network (CDN). The incident resulted in a 403 error, which indicated that user requests could not be fulfilled, effectively blocking access to essential digital services. This event raises critical questions about the reliability of cloud-based infrastructures, particularly as digital operations become increasingly central to business functionality.

CloudFront, part of Amazon Web Services (AWS), is designed to optimize the delivery of data, applications, and APIs globally by reducing latency and enhancing transfer speeds. On this day, however, the service appeared to buckle under unusual load, leading to widespread access issues. AWS reports that CloudFront supports millions of websites worldwide, underscoring the importance of its operational stability for businesses that depend on uninterrupted internet access.

The 403 error encountered by many users signifies that access to a resource is forbidden: CloudFront received the request but refused to serve it, rather than simply failing to reach the server hosting the requested application or content. Such refusals can arise from various factors, including server misconfigurations, firewall or access-control rules, excessive traffic loads, or issues with the origin server that CloudFront was trying to reach. The absence of an immediate explanation from AWS regarding the specific cause of the disruption led to speculation about the incident’s nature and its implications for users and businesses alike.
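To make the distinction concrete, here is a sketch of how a client might triage CDN status codes during such an outage. The status-to-action mapping is illustrative only, not an AWS recommendation:

```python
from enum import Enum

class Action(Enum):
    SERVE = "use the response"
    RETRY = "retry with backoff"             # transient server-side failure
    FALLBACK = "try the origin or a mirror"  # edge refused the request
    FAIL = "surface the error to the user"

def triage(status: int) -> Action:
    """Map an HTTP status code from a CDN edge to a client-side action."""
    if 200 <= status < 400:
        return Action.SERVE
    if status == 403:
        # The edge refused the request (misconfiguration, access rules);
        # retrying the same edge is unlikely to help.
        return Action.FALLBACK
    if status == 429 or status >= 500:
        # Overload or a transient edge/origin error: worth retrying.
        return Action.RETRY
    return Action.FAIL  # other client errors are final
```

The design point is that a 403 from the edge is a refusal, not an outage of the origin, so the useful reaction differs from the backoff-and-retry loop appropriate for 5xx responses.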

While the precise extent of the outage remains unclear, its potential impact is significant. Businesses utilizing CloudFront for service delivery could experience revenue losses, increased customer dissatisfaction, and reputational damage. Affected sectors included e-commerce, news media, and entertainment, where timely access to services is crucial. This incident serves as a stark reminder of the fragility inherent in cloud infrastructures, especially as reliance on such services continues to grow.

Historically, there have been several notable instances of severe outages in cloud services that resulted in widespread disruptions. For example, a major AWS outage in December 2021 caused interruptions for platforms like Netflix and Ring. Such incidents have sparked discussions about the vulnerabilities associated with a concentrated reliance on a limited number of cloud service providers. Critics argue that these outages highlight the risks of single points of failure within the digital economy, emphasizing the need for more resilient infrastructure and diversified service strategies.

As users attempted to troubleshoot the access issues, reports indicated that the CloudFront error was not isolated to any single website or service. Instead, failures were reported across a broad spectrum of platforms, suggesting a systemic problem rather than isolated incidents. In response to the disruption, CloudFront’s official documentation advised users experiencing similar issues to check their configurations and optimize server settings for high traffic scenarios. This guidance aims to help mitigate the risks of future outages, but it also reflects the reality that businesses must be proactive in managing their digital infrastructure.

The disruption on October 11 serves as a critical reminder for stakeholders in the tech industry to reassess their reliance on cloud services. As digital traffic continues to surge, implementing fail-safes or alternative solutions may become essential for ensuring operational continuity. Companies could benefit from enhanced monitoring systems and robust contingency plans to address potential service disruptions.

Moreover, this incident could spark a broader conversation about the need for improved infrastructure resilience in the face of increasing digital demands. As businesses and consumers become more dependent on cloud services, the ability of these services to withstand unforeseen traffic spikes will be paramount in maintaining accessibility and reliability. The necessity for diversified cloud solutions, including hybrid approaches that combine on-premises and cloud resources, may become more pronounced in light of this incident.

In conclusion, the CloudFront service disruption on October 11, 2023, not only hindered user access but also underscored the vulnerabilities of heavily relying on a limited number of cloud service providers. As these technologies continue to evolve, the imperative for robust, resilient infrastructure will only intensify, shaping the future of digital accessibility and reliability in our increasingly interconnected world.

Fake Google Meet Update Allows Hackers to Control Windows PCs

A new phishing scheme exploits a fake Google Meet update page to trick Windows users into granting hackers remote control of their computers.

A recent discovery by cybersecurity researchers has unveiled a sophisticated phishing tactic that targets Windows users through a counterfeit Google Meet update page. This deceptive scheme allows attackers to gain control of victims’ computers without the need for traditional malware or stolen passwords.

The fake update page, designed to resemble an official Google Meet notification, prompts users to click a button labeled “Update now.” However, instead of downloading a legitimate update, this action enrolls the user’s Windows computer in a remote management system controlled by the attackers.

Researchers from Malwarebytes, a cybersecurity firm known for its malware detection and removal software, identified this phishing website. The page employs familiar Google branding and colors, making it appear credible to unsuspecting users. Once a user clicks the “Update now” button, a built-in Windows feature is triggered, leading to a legitimate system window titled “Set up a work or school account.” This window typically appears when an IT department configures a device for an employee.

In this scam, the setup window is pre-filled with information that connects the computer to a remote management server controlled by the attacker. The system points to an online management service hosted on Esper, a legitimate platform used by businesses to manage their devices. If the victim proceeds through the setup process, their computer becomes enrolled in a mobile device management system, granting the attacker the same level of control that a corporate IT department would have over a work laptop.

Security experts note that attackers do not expect all users to complete the enrollment process. Even a small number of successful enrollments can provide enough access to make the campaign worthwhile.

This phishing attack exploits a legitimate Windows feature rather than relying on malware installation. Windows includes a device enrollment feature that allows companies to connect employee computers to a management system. Once a device is enrolled, administrators can remotely control various aspects of that machine. In a typical workplace, this functionality aids IT teams in installing software, enforcing security settings, and managing devices. However, attackers have found a way to trick users into joining their management system.

When users click the fake update button, Windows initiates a built-in enrollment process, which appears legitimate and can bypass many security warnings. If users complete the steps, the attacker effectively becomes the administrator of their computer, enabling them to silently install software, modify system settings, access files, lock screens, or even wipe the device entirely. Additionally, the attacker could install further malware at a later stage. Traditional antivirus tools may not detect any issues, as the operating system itself is executing the actions.

In response to inquiries, a Google spokesperson stated, “These ‘update now’ prompts are not legitimate Google communications. This is a phishing campaign that attempts to trick users into a Windows device enrollment process. Google Meet updates are handled automatically through your browser or the official app. Google will never prompt you to visit a third-party site to enroll a personal device to receive an update.”

To avoid falling victim to such scams, users are advised to exercise caution when encountering messages that prompt updates. It is essential to verify the legitimacy of such requests before proceeding. Major platforms rarely require updates through random web pages; legitimate Google Meet updates occur automatically through the browser or the official app and do not necessitate visiting third-party sites.

Users should always check the URL bar to ensure they are on the official Google Meet site, which is meet.google.com. A genuine update will not attempt to enroll an entire computer or trigger system-level setup screens. If such a prompt appears unexpectedly, it is likely a scam. Instead, users should access the service directly from its official website or app to check for updates.
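That manual URL check can also be automated, for instance in a browser extension or a link-scanning tool. The following is a minimal Python sketch; the allow-list contains only the host named above, and a real deployment would maintain a broader list:

```python
from urllib.parse import urlparse

LEGITIMATE_HOSTS = {"meet.google.com"}

def is_expected_host(url: str) -> bool:
    """Return True only when the link's hostname exactly matches an
    allow-listed host. Exact matching matters: a lookalike such as
    meet.google.com.evil.example must not pass."""
    host = (urlparse(url).hostname or "").lower()
    return host in LEGITIMATE_HOSTS
```

Note that the check compares the full hostname, never a substring, which is exactly why phishing domains that merely contain "meet.google.com" somewhere in the name are rejected.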

On a Windows computer, users can navigate to Settings, then Accounts, and look for “Access work or school.” If they see an unfamiliar account or organization listed, especially one they do not recognize, they should disconnect it immediately. This section indicates whether a device has been enrolled in a remote management system.

Cybercriminals often leverage personal information available online to enhance the effectiveness of their phishing attacks. Data removal services can help eliminate personal information from data broker sites, reducing the likelihood of targeted attacks. While this may not prevent this specific phishing tactic, it can make individuals harder targets overall.

Google’s AI protections in Gmail block over 99.9% of spam, phishing, and malware, but scams can still reach users through search results, ads, or links shared outside their inbox. Therefore, employing robust antivirus software with real-time protection can help detect suspicious behavior that may arise after an attacker gains control of a device. Although this phishing attack utilizes legitimate Windows features, security tools can still identify unusual system changes or malicious software installed afterward.

Keeping software up to date is crucial, as updates often include security enhancements that help block new attack methods. Running the latest version of Windows and web browsers reduces the risk of attackers exploiting older system vulnerabilities.

Using a password manager can also enhance security by ensuring that login details are only autofilled on legitimate websites. If users encounter a phishing page masquerading as a service like Google Meet, their password manager will not fill in their information, serving as a warning that something is amiss.

If a Windows system window unexpectedly appears, asking users to set up a work or school account, they should stop immediately. Legitimate setup prompts typically arise when configuring a device or following employer instructions, not from clicking on random websites. If such a window appears without prior expectation, it should be closed immediately.

As cybercrime evolves, attackers increasingly exploit legitimate features embedded within operating systems and cloud services. In this instance, both Windows device enrollment and the management platform used are genuine tools designed for business use, which attackers have redirected toward unsuspecting individuals. This highlights the ease with which powerful enterprise features can be repurposed for malicious purposes in the absence of adequate safeguards.

For further information on this phishing scheme and to stay updated on cybersecurity best practices, visit CyberGuy.com.

Harvard Physicist Suggests Interstellar Object May Be Alien Probe

Harvard physicist Dr. Avi Loeb suggests that the interstellar object 3I/ATLAS may be an alien probe due to its unusual characteristics and trajectory.

A recently discovered interstellar object, designated 3I/ATLAS, has sparked intrigue among astronomers and scientists alike. Harvard physicist Dr. Avi Loeb posits that the object’s peculiar features could indicate it is more than a typical comet, potentially serving as a reconnaissance mission from an extraterrestrial source.

3I/ATLAS was first identified in early July by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Chile. This marks only the third time an interstellar object has been observed entering our solar system, according to NASA.

While NASA has classified 3I/ATLAS as a comet, Dr. Loeb points out that an image of the object reveals an unexpected glow in front of it, rather than the typical tail that comets exhibit. “Usually with comets, you have a tail, a cometary tail, where dust and gas are shining, reflecting sunlight, and that’s the signature of a comet,” he explained. “Here, you see a glow in front of it, not behind it.” This anomaly has raised questions about the object’s true nature.

Measuring approximately 20 kilometers across, 3I/ATLAS is larger than Manhattan and is notably bright given its distance from the sun. However, Dr. Loeb emphasizes that the most striking aspect of the object is its trajectory. He notes that if one were to imagine objects entering the solar system from random directions, only one in 500 would be aligned so precisely with the orbits of the planets.

Furthermore, 3I/ATLAS is expected to pass near Mars, Venus, and Jupiter, which Dr. Loeb argues is highly improbable to occur by chance. “It also comes close to each of them, with a probability of one in 20,000,” he stated.

The object is projected to reach its closest point to the sun, approximately 130 million miles away, on October 30. Dr. Loeb underscores what would be at stake if the object’s origins prove technological, stating, “If it turns out to be technological, it would obviously have a big impact on the future of humanity. We have to decide how to respond to that.”

In a related context, earlier this year, astronomers from the Minor Planet Center at the Harvard-Smithsonian Center for Astrophysics mistakenly identified a Tesla Roadster, launched into orbit by SpaceX CEO Elon Musk seven years ago, as an asteroid.

As the scientific community continues to analyze 3I/ATLAS, the implications of its characteristics and trajectory remain a topic of significant interest and debate. The possibility of it being an alien probe invites further investigation and discussion about our understanding of interstellar objects.

A spokesperson for NASA did not immediately respond to requests for comment regarding the findings and implications surrounding 3I/ATLAS, according to Fox News Digital.

CoreWeave Secures $8.5 Billion Loan for AI Infrastructure Growth

CoreWeave has secured an $8.5 billion loan to enhance its AI cloud infrastructure, reflecting strong market confidence in the growing demand for artificial intelligence.

CoreWeave, a cloud infrastructure specialist, has announced that it has secured a delayed-draw term loan facility of up to $8.5 billion aimed at scaling its AI cloud infrastructure. The initial draw from this facility is approximately $7.5 billion, with an option to increase the total to $8.5 billion as the company stabilizes its data center assets.

The seven-year loan, which matures in March 2032, was arranged by Morgan Stanley and MUFG, with Blackstone Credit & Insurance serving as the anchor investor. This significant financing milestone is part of a broader $28 billion raised by CoreWeave over the past 12 months, underscoring the strong market confidence in the demand for AI technologies.

CoreWeave plans to utilize the funds to fulfill major AI contracts and accelerate the expansion of its infrastructure. Brannin McBee, co-founder of CoreWeave, expressed pride in partnering with leading financial institutions for this landmark transaction, stating, “This reflects confidence in AI adoption and market validation of our model.”

The loan features a SOFR-based floating tranche at SOFR+2.25% and a fixed-rate tranche at approximately 5.9%. Specific covenants related to the loan were not disclosed.
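For a rough sense of the carrying cost, here is a back-of-the-envelope sketch in Python. The tranche split and the SOFR fixing below are assumptions for illustration only, since neither was disclosed:

```python
def annual_interest(principal: float, annual_rate: float) -> float:
    """Simple (non-compounding) one-year interest."""
    return principal * annual_rate

ASSUMED_SOFR = 0.043   # hypothetical SOFR fixing, illustrative only
floating_draw = 4.0e9  # hypothetical tranche sizes summing to the
fixed_draw = 3.5e9     # ~$7.5B initial draw

floating_cost = annual_interest(floating_draw, ASSUMED_SOFR + 0.0225)  # SOFR + 2.25%
fixed_cost = annual_interest(fixed_draw, 0.059)                        # ~5.9% fixed
total = floating_cost + fixed_cost
# Under these guesses, roughly $0.47 billion per year in interest.
```

Even under conservative assumptions, the annual interest bill runs into the hundreds of millions of dollars, which illustrates why swift deployment of the financed hardware matters so much to the company’s model.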

Since completing its initial public offering (IPO) in March 2025, CoreWeave has rapidly expanded its operations, including a recent investment in a data center in the United Kingdom. The company reportedly holds an 18% share of the dedicated AI GPU market. This financing comes at a time when capital spending on AI infrastructure is experiencing a boom, with Bank of America and Reuters noting that U.S. data center investments have reached record highs as major tech companies invest billions into AI.

CoreWeave faces competition from both hyperscale cloud providers and smaller GPU-focused companies. For instance, Lambda Labs raised $480 million in early 2025 and secured a $500 million GPU-backed loan, while Crusoe Energy recently closed a $350 million Series C funding round and obtained $200 million in asset-backed financing.

However, high leverage poses risks, particularly if demand for AI slows or if supply chain disruptions affect GPU deliveries. CoreWeave will need to deploy its equipment swiftly to service contracts and manage debt refinancing as it continues to expand. The company’s next steps include drawing on the loan facility in the coming quarters to fund data center construction and chip purchases. Its progress will be closely monitored in relation to competitors and the broader AI market cycle.

According to American Bazaar, this loan marks a significant step for CoreWeave as it positions itself to meet the increasing demands of the AI sector.

FBI Email Hack Highlights Importance of Securing Technology

The recent hacking of FBI Director Kash Patel’s personal email highlights the urgent need for individuals to strengthen their cybersecurity practices.

In a concerning incident, the personal email account of FBI Director Kash Patel was hacked, with the Iranian group known as the Handala Hack Team claiming responsibility. While the FBI confirmed that no classified data was compromised, the breach underscores a significant vulnerability in personal cybersecurity.

The breach involved unauthorized access to Patel’s personal email, exposing sensitive information such as photos, travel details, and older messages dating back over a decade, from 2011 to 2022. Although the FBI did not attribute the attack to a specific nation, the Handala Hack Team has publicly taken credit for the incident.

The FBI emphasized that no government or classified data was involved in the breach. In response to the threat posed by the Handala Hack Team, the U.S. State Department is offering a reward of up to $10 million for information leading to the identification of its members. CyberGuy reached out to the FBI for comment but did not receive a response before the article’s deadline.

A cybersecurity expert described the exposed material as akin to a “personal junk drawer,” a metaphor that resonates with many individuals who may have similar vulnerabilities in their own email accounts. The incident serves as a stark reminder that if even the head of the FBI can fall victim to hackers, ordinary users are equally at risk.

U.S. officials have long warned that foreign government-linked hackers, particularly those associated with Iran, have been targeting American citizens, especially those involved in government or political activities. Such cyberattacks often escalate during periods of geopolitical tension. Previous targets have included individuals connected to the Trump administration, as well as private companies, such as a recent incident involving a U.S. medical device company that faced operational disruptions due to hacking.

The shift in cyber warfare tactics is evident: personal accounts are now prime targets for hackers. This is largely because personal email accounts tend to have weaker security measures compared to official government systems. Many users rely on reused passwords, outdated security practices, and old email accounts, making them easier targets for malicious actors.

Once hackers gain access to an email account, they can exploit the information for various malicious purposes, potentially compromising not just the account itself but also associated accounts and personal data.

To mitigate these risks, individuals are encouraged to adopt stronger cybersecurity habits. One of the most effective defenses is enabling two-factor authentication (2FA) on email accounts. This additional layer of security requires a second code, making it significantly more difficult for hackers to gain access even if they have stolen a password.
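As a concrete illustration, the “second code” generated by most authenticator apps is a time-based one-time password (TOTP, standardized in RFC 6238): it is derived from a shared secret and the current time, so a stolen password alone is not enough to log in. A minimal sketch using only Python’s standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: the base32-encoded shared secret (what the QR code holds).
    at: a Unix timestamp, defaulting to now.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of elapsed 30-second time steps.
    counter = (int(time.time()) if at is None else at) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server computes the same code from its copy of the secret and accepts it only within a short time window, which is why codes expire every 30 seconds.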

It is also crucial to avoid reusing passwords across multiple accounts. A single breach can jeopardize an entire digital life. Utilizing a password manager to create unique passwords for each account can enhance security significantly.

Moreover, users should regularly review and delete unnecessary emails and documents that contain sensitive information, such as financial details or travel plans. Important files should be moved to secure locations rather than left in an inbox, which can be a tempting target for hackers.

As cyberattacks become increasingly sophisticated, hackers can leverage stolen data to craft convincing phishing emails that appear legitimate. Therefore, it is essential to verify links and sender addresses before clicking on any content. Employing robust antivirus software can also provide an additional layer of protection against suspicious activities.

Even with proactive measures, personal information may still be circulating on data broker sites, which collect and sell details like addresses and phone numbers. Using a data removal service can help mitigate this risk by requesting the removal of personal information from numerous sites, thereby reducing the amount of data available to potential attackers.

Keeping devices updated is another critical step in maintaining cybersecurity. Software updates often include patches for known vulnerabilities, and delaying these updates can leave systems exposed to exploitation.

Using different email accounts for various purposes—such as banking, shopping, and personal communication—can limit the damage if one account is compromised. Email aliases can also be beneficial; these alternate addresses forward to a primary inbox and can be disabled if they become a target for spam or hacking attempts.

Another emerging security measure is the use of passkeys, which replace traditional passwords with secure logins tied to devices or biometrics. This method is considered one of the safest ways to protect accounts, as passkeys cannot be reused or phished.

The landscape of cybersecurity is evolving, with adversaries demonstrating their capability to adapt and target both institutions and individuals. However, the most common entry point for hackers remains simple: weak passwords and outdated security practices. This reality emphasizes that the first line of defense against cyber threats is not solely the responsibility of government agencies but also lies with individual users.

As the threat of cyberattacks continues to grow, it is crucial for everyone to take proactive steps to secure their digital lives. For more information on how to enhance your cybersecurity practices, visit CyberGuy.com.

According to CyberGuy, adopting smarter habits today can significantly reduce the risk of falling victim to cyber threats.

Baseball Embraces Robot Umpire Challenges Amid Changing Landscape

Major League Baseball introduces the Automated Ball-Strike Challenge System, allowing players to challenge calls using technology, marking a significant shift in the game’s officiating.

For generations, baseball has adhered to a straightforward rule: the umpire’s call on balls and strikes is final. However, this season, Major League Baseball (MLB) is set to revolutionize the game with the introduction of the Automated Ball-Strike Challenge System, commonly referred to as the “robot ump.” This innovation allows players to challenge an umpire’s call, enabling technology to determine the outcome.

The Automated Ball-Strike Challenge System (ABS) employs advanced camera technology to meticulously track every pitch, creating a digital strike zone tailored to each batter’s height. While the system enhances accuracy, it does not fully relinquish control to machines. Instead, it operates as a hybrid model where human umpires continue to make calls on the field, but players now have the option to challenge those calls if they believe an error has been made.
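The geometry of a batter-scaled zone can be sketched as a simple containment test: the plate’s width is fixed at 17 inches, while the top and bottom of the zone scale with the batter’s height. The height fractions below are illustrative assumptions, not MLB’s calibrated values:

```python
PLATE_HALF_WIDTH = 17.0 / 2  # regulation home plate is 17 inches wide
BALL_RADIUS = 2.9 / 2        # a baseball is roughly 2.9 inches in diameter

def is_strike(x, z, batter_height, top_frac=0.535, bottom_frac=0.27):
    """Return True if a pitch clips the batter-scaled zone.

    x: horizontal offset from the center of the plate, in inches.
    z: height of the ball as it crosses the plate, in inches.
    batter_height: the batter's height in inches; top_frac and bottom_frac
    are hypothetical fractions of that height defining the zone's bounds.
    """
    top = batter_height * top_frac
    bottom = batter_height * bottom_frac
    # A pitch counts if any part of the ball intersects the zone.
    in_width = abs(x) <= PLATE_HALF_WIDTH + BALL_RADIUS
    in_height = (bottom - BALL_RADIUS) <= z <= (top + BALL_RADIUS)
    return in_width and in_height
```

For a 6-foot-2 batter (74 inches), this hypothetical zone runs from roughly 20 to 40 inches off the ground; a pitch a foot off the plate’s center fails the width test regardless of height.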

High-speed cameras strategically positioned around the stadium capture the pitch in three dimensions, measuring its trajectory as it crosses home plate. This data is processed in milliseconds, allowing results to be displayed almost instantly on stadium screens. Scott Jacka, senior director of technology development strategy at T-Mobile, explained that the company’s private 5G network facilitates the rapid transmission of pitch data to the ABS operator, ensuring that results are relayed back to the field without delay.

Each team begins a game with two challenges, which can only be initiated by the pitcher, catcher, or batter—no assistance from the dugout is allowed. Players signal a challenge by tapping their heads, and within seconds, the stadium displays the pitch’s location and whether it was a ball or a strike. If the challenge is upheld, the team retains its challenge; if not, they lose one. This quick process has already become one of the most thrilling aspects of the game, with teams potentially receiving additional challenges during extra innings.
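The challenge accounting described above is simple enough to state as code: a team starts with two challenges, keeps one that succeeds, and spends one that fails. A minimal sketch of that bookkeeping:

```python
class ChallengeTracker:
    """Toy model of ABS challenge accounting: each team starts with two
    challenges; a successful (overturned) challenge is retained, a failed
    one is spent."""

    def __init__(self, challenges=2):
        self.remaining = challenges

    def can_challenge(self):
        return self.remaining > 0

    def resolve(self, overturned):
        """Record one challenge's outcome; return challenges left."""
        if not self.can_challenge():
            raise RuntimeError("no challenges remaining")
        if not overturned:
            self.remaining -= 1
        return self.remaining
```

Under these rules a team that keeps winning challenges never runs out, which is exactly the incentive to save them for high-confidence, high-leverage calls.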

Reliability is a crucial consideration for any new system, and MLB designed ABS to deliver results swiftly, ensuring the game remains uninterrupted. In the event of a malfunction, the human umpire remains the ultimate authority, providing a safety net to maintain the flow of the game.

The technology behind the ABS system is powered by Hawk-Eye Innovations, which is also used in tennis and soccer for line calls and goal decisions. This established technology lends credibility to the system’s accuracy. T-Mobile supports the infrastructure necessary for the rapid delivery of results to both stadium displays and broadcast feeds.

Historically, contentious ball and strike calls have been a part of baseball, often becoming focal points of discussion among fans and players alike. However, as technology advances, there is a growing impatience with mistakes that could be easily rectified. MLB views the ABS system as a means to alleviate frustration without entirely removing the human element from the game.

The introduction of challenges adds a layer of tension to the game, as fans and players alike await the outcome of each call. Instead of prolonged debates over disputed calls, the ABS system provides immediate clarity, transforming potential controversies into moments of drama.

Early testing has revealed that the timing of challenges can be more critical than the specific calls being challenged. Players who use their challenges too early may find themselves at a disadvantage later in high-pressure situations. Emotions can also play a role, leading to impulsive decisions that could cost teams in crucial moments.

Not every pitch is straightforward to challenge. High-velocity pitches and those with significant movement can be particularly difficult to judge in real time. Even seasoned players may misjudge a pitch by mere inches, complicating the decision to challenge.

This dynamic opens the door for players with exceptional plate discipline, such as Juan Soto, to leverage their skills strategically. Conversely, catchers face a shifting landscape; pitch framing—an art where catchers subtly position their gloves to influence the umpire’s call—will not disappear but will evolve as a strategic tool in conjunction with the ABS system.

Pitchers, on the other hand, may be less inclined to utilize the challenge system. Many believe they lack the best perspective on the strike zone during live play. Veteran players like Max Scherzer have raised broader questions about the extent to which technology should influence the game, a debate that remains unresolved.

Beyond officiating, the ABS system generates a wealth of data that teams can analyze in real time. This data can provide insights into pitch accuracy, player tendencies, and challenge success rates, potentially influencing coaching strategies and player evaluations.

While MLB has experimented with fully automated strike zones in the minor leagues, the traditional nature of baseball means many players and fans still value the human element behind the plate. They believe that the personality and judgment of umpires, along with their imperfections, contribute to the sport’s unique charm.

At present, the challenge system represents a compromise, addressing significant officiating errors while retaining the human touch that many cherish. As fans watch games unfold, they may notice a newfound fairness, with pivotal moments less likely to hinge on missed calls. The game is becoming more strategic, as players must weigh the timing of their challenges carefully, knowing that a single misstep could have lasting consequences.

In summary, baseball continues to evolve, integrating technology while striving to preserve its core essence. The robot ump challenge system enhances the game by empowering players to voice their concerns over calls, ultimately shaping a more transparent and engaging experience for fans. As the debate over technology’s role in baseball continues, one question remains: if technology can ensure accuracy, will fans embrace it over the traditional human umpire?

According to CyberGuy, the introduction of the ABS system marks a significant step forward in the evolution of baseball officiating.

Indian-American Satish Jha Discusses Technology and Ideas in Global Boardrooms

Satish Jha, a Boston-based journalist and edtech pioneer, discusses the thoughtful application of technology and its potential for social impact in a conversation reflecting on his diverse career journey.

Technology creates opportunity, but it must be applied thoughtfully, says Satish Jha, a Boston-based journalist, edtech pioneer, and investor who led the One Laptop per Child initiative in India.

Few careers move as seamlessly across journalism, global corporate leadership, investing, and social impact as that of Satish Jha. From co-founding Jansatta, one of India’s most influential Hindi dailies, and editing Dinamaan at the Times of India Group, to serving in CXO roles with Fortune 100 companies in Switzerland and the United States, Jha’s journey spans institutions, geographies, and ideas. In recent years, he has been an early-stage investor in numerous U.S. startups and a driving force behind technology-led social initiatives, including leading One Laptop per Child (OLPC) in India and supporting large-scale education efforts through the Vidyabharati Foundation of America and Ashraya.

Jha is also the author of *The Full Plate: India’s Education Revolution and the Race for Human Capital*, and he contributes a regular column to *The American Bazaar*.

In a wide-ranging conversation with Kesav Dama, Jha reflects on the formative influence of his upbringing and his years at Jawaharlal Nehru University, the bold decisions that helped build a modern Hindi newspaper from scratch, and the evolving role of journalism in an age of social media and misinformation. He also discusses his transition into global corporate leadership, his approach to investing, and his long-standing commitment to using technology to drive social impact—from rural development and digital infrastructure to energy, healthcare, and education.

At its core, the conversation returns to a few enduring themes: the power of ideas when paired with execution, the importance of humanizing technology, and the belief that while circumstances shape opportunity, they need not define outcomes. The interview has been edited for clarity.

Kesav Dama: You were born in Bihar and spent time in Lucknow and Varanasi. Tell us about your upbringing—especially your parents and their influence on you.

Satish Jha: My upbringing was shaped by two very different yet complementary influences. On my father’s side, there was a strong emphasis on education and scholarship. My grandfather was a professor of Sanskrit, and even though my father lost him at a very young age, that intellectual tradition continued in our household.

On my mother’s side, the family had a more aristocratic background—there were administrators, lawyers, and professionals of various kinds. It was a family that valued leadership and public life. So, in a way, I grew up at the intersection of intellectual rigor and social awareness. One side grounded me in discipline and learning; the other exposed me to ambition and public engagement. That combination stayed with me throughout my life.

Kesav Dama: Do you agree with the idea that where and when you are born largely determines your future?

Satish Jha: I would say it determines a significant part of it—perhaps 70-80 percent. Your environment, access, and early influences shape your opportunities. But I don’t think it is destiny. There is still room for agency, for effort, and for making choices that alter your trajectory.

Kesav Dama: You studied economics at Jawaharlal Nehru University (JNU) in the late 1970s. What was that experience like?

Satish Jha: At the time I was there, JNU was probably one of the most extraordinary academic environments in India. It brought together an incredibly talented group of students and thinkers. To give you a sense of that ecosystem—people from my extended academic circle went on to become global leaders. Abhijit Banerjee, who later won the Nobel Prize in Economics, was part of that intellectual milieu. Others went on to lead major institutions, join policymaking bodies, or build global corporations. JNU was not just about academics. It was about exposure to ideas—politics, economics, philosophy—and learning how to question, debate, and engage. That environment shaped how we thought about the world.

Kesav Dama: You’ve consistently worked at the intersection of technology and social impact. Why is that important to you?

Satish Jha: Technology, by itself, is just a tool. What matters is how societies absorb and use it. Different societies exist at different stages of development. Some create cutting-edge technologies, while others are still trying to absorb earlier innovations. Progress depends on how effectively a society can adopt and apply technology. If technology is too advanced for a society to absorb, it has little impact. If there is no access to technology at all, progress stalls. So the key is alignment—using the right level of technology to drive meaningful social outcomes. Technology is necessary for progress, but it is not sufficient. It must be humanized. It must serve people.

Kesav Dama: You co-founded a Hindi daily and scaled it rapidly. What were the key decisions that drove that success?

Satish Jha: I came into journalism without prior experience, which, in hindsight, was an advantage. I had no preconceived notions and was willing to experiment. One of the most important decisions we made was to adopt computers for publishing. At that time, no newspaper in India was fully composed using computers. We took that leap despite not knowing exactly how to implement it. The second key decision was about language. We chose to write in a way that ordinary people spoke—not in overly formal or translated Hindi. That made the newspaper accessible. We also focused on presentation—better layout, better readability, and a modern look. Combined with strong content and distribution support, it helped us stand out. In short, we were willing to take risks others were not willing to take.

Kesav Dama: How do you see the difference between traditional journalism and today’s social media-driven landscape?

Satish Jha: Journalism and social media are fundamentally different. Journalism is an institution. It operates within a framework of accountability, standards, and professional norms. Journalists are trained, and their work is subject to scrutiny. Social media, on the other hand, is a platform for expression. Anyone can publish anything. That democratization has value, but it also creates challenges—especially around misinformation.

Today, the biggest issue is not access to information—it is the ability to distinguish between what is real and what is not. Even I find myself questioning what I see. However, over time, people will adapt. They will learn to ask questions, verify sources, and use tools—including AI—to check authenticity. Progress is never linear. It is messy, but it moves forward.

Kesav Dama: With so much free content available, how can journalism remain financially viable?

Satish Jha: Journalism survives where there is demand. If people value credible information, they will pay for it—directly or indirectly. The challenge today is that attention is fragmented. But credibility still matters. In the long run, institutions that build trust will endure.

Kesav Dama: How did you transition from journalism into global corporate leadership?

Satish Jha: That transition happened largely because of circumstances and opportunities. When my wife moved to Geneva for her work with global health initiatives, I relocated as well. While there, I pursued further education and began exploring opportunities. I received offers from major global organizations, including leadership roles in technology and strategy. I chose a path that allowed me to work internationally and engage with global markets. One of my guiding principles was simple: if you give me a dollar, I will return more than a dollar. That mindset helped build trust.

Kesav Dama: You later moved into investing and entrepreneurship. How did that evolve?

Satish Jha: After years in corporate leadership and consulting, I began to understand how businesses are built and scaled. That naturally led to investing. I started investing in early-stage companies—particularly those working on technologies that could create new possibilities or make things cheaper, faster, or better. Over time, I made dozens of investments. Some succeeded, some didn’t. That’s the nature of early-stage investing. For me, investing is not just about returns. It is about people, ideas, and the potential to create impact.

Kesav Dama: What do you look for when deciding whether to invest in a startup?

Satish Jha: There are a few key criteria: sustainability, scalability, profitability potential, and impact. But beyond all that, it comes down to people. Do I believe in the founders? Do I understand the space? Does it excite me?

Kesav Dama: You’ve been involved in rural development initiatives since a young age. How did that shape your later work?

Satish Jha: I started working in rural areas when I was about 16 or 17. It wasn’t driven by a grand plan—it was more of an instinct to contribute. Later, when I worked on initiatives like Digital Partners India, the idea was to use technology to bridge gaps—especially where physical infrastructure was lacking. We talked about “digital highways” instead of physical roads. That idea later influenced various models adopted by corporations and governments.

Kesav Dama: You’ve been associated with ideas that resemble today’s digital infrastructure systems in India. How do you view that evolution?

Satish Jha: The core idea was always about simplifying access—using technology to connect identity, finance, and services. There are many ways to build such systems. Some are more efficient than others. What matters is usability, scalability, and cost-effectiveness. India has made significant progress, but there is always room for simplification.

Kesav Dama: Tell us about your work in energy and healthcare for underserved communities.

Satish Jha: In energy, we worked on decentralized systems—using biomass and local resources to generate power. The goal was to create small, self-sustaining units that could serve rural communities. In healthcare, we focused on digitizing patient data. We built systems where doctors could access a patient’s history through a digital platform—something that seems obvious today but was quite innovative at the time. Both efforts were about leveraging technology to solve real-world problems.

Kesav Dama: What is your vision for the future of education in India?

Satish Jha: Education is the single most powerful lever for societal transformation. The issue in India is not just access—it is quality. A large percentage of students are not receiving education that equips them for the future. The solution is not necessarily more spending—it is smarter spending. Technology can reduce costs and improve outcomes, but it must be applied effectively. If we invest meaningfully in education, the economic impact could be transformative.

Kesav Dama: You’ve mentored many entrepreneurs. What drives that?

Satish Jha: At this stage of my life, I feel a responsibility to contribute. I don’t look at mentorship as a structured activity. I engage where I feel I can make a difference—where my experience can help someone move forward. It’s not about scale. It’s about impact.

Kesav Dama: You’ve been closely associated with TiE. How do you see its role today?

Satish Jha: TiE has played an important role in building the startup ecosystem, especially in early-stage investing and mentorship. But ecosystems evolve. New institutions emerge to address new needs. TiE remains relevant, but it is part of a larger, multi-layered ecosystem.

Kesav Dama: How did you get involved with the One Laptop Per Child initiative?

Satish Jha: I was introduced to the initiative and felt it was being misunderstood—especially in India. I reached out, got involved, and eventually took responsibility for driving it in India. It was an extraordinary experience—both in terms of learning and impact. Not everything scaled the way we hoped, but the idea was powerful.

Kesav Dama: If you had to summarize your journey and message, what would it be?

Satish Jha: The message is simple: you can do it. Where you come from matters, but it does not define your limits. Technology creates opportunities, but it must be applied thoughtfully. And ultimately, progress happens when people connect ideas with action.

The interview highlights Jha’s belief in the transformative power of technology when used responsibly and effectively, underscoring the importance of human-centered approaches in driving social change, according to The American Bazaar.

Roblox Enhances Online Safety Measures Through Artificial Intelligence

Roblox is implementing a real-time AI moderation system to enhance online safety by analyzing avatars, text, and environments simultaneously across its platform.

Roblox, a popular online platform with over 144 million daily users, is introducing a new real-time AI moderation system aimed at detecting harmful content. This innovative approach analyzes avatars, text, and environments together, addressing the complexities of moderation in a user-generated ecosystem.

Unlike traditional moderation tools that evaluate individual elements in isolation, Roblox’s new system employs what is known as multimodal moderation. This method assesses the entire scene from the user’s perspective, capturing the interplay between 3D objects, avatars, and text in real time. Matt Kaufman, Roblox’s chief safety officer, explained the significance of this shift, stating, “We already moderate all of the objects in a virtual world, but how they come together and interact has long been a challenge.”

The challenge of moderation arises from the fact that harmful content can often be subtle and context-dependent. Kaufman noted, “Traditional AI moderation systems, which moderate one object at a time, can lack context and miss combinations that could be problematic in ways that the individual items are not.” This new system aims to fill that gap by understanding the relationships between different objects and how they interact, thus catching nuanced violations that standard filters might overlook.

Roblox’s multimodal moderation system is particularly focused on scenarios that have historically slipped through the cracks. For instance, in games that allow free-form drawing or avatar customization, a drawing or an avatar may seem harmless on its own. However, when combined, they could create inappropriate content. Kaufman elaborated, “The system can detect combinations of objects that may violate our community standards,” allowing for a more comprehensive assessment of user-generated content.
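The core idea, individually harmless items becoming a violation in combination, can be illustrated with a toy rule. The signal names and the pairing rule below are invented for illustration and do not reflect Roblox’s actual classifiers:

```python
def scene_needs_review(item_flags):
    """Toy 'multimodal' check: flag a scene when signals from different
    items combine into a risky pattern, even though no single item's
    flags would trigger review on their own.

    item_flags: a list of sets, one set of borderline-signal labels per
    object in the scene (avatar, drawing, text, etc.).
    """
    combined = set()
    for flags in item_flags:
        combined |= flags
    # Hypothetical rule: a borderline drawing plus a borderline avatar
    # pose together warrant review; either alone passes.
    risky_combinations = [{"borderline_drawing", "borderline_pose"}]
    return any(combo <= combined for combo in risky_combinations)
```

A per-item moderator sees each set in isolation and passes everything; the scene-level check sees the union and catches the combination, which is the gap multimodal moderation is meant to close.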

The system is already yielding significant results, with Roblox reportedly shutting down around 5,000 servers daily for violations. Kaufman emphasized the scale of the platform, stating, “With 144 million users connecting and creating on Roblox every single day, our safety systems must be as agile and dynamic as our creators themselves.”

While the new system is designed to act swiftly against harmful behavior, Kaufman acknowledged that no system is entirely foolproof. “We are committed to doing our best to stay ahead of those attempting to bypass safety protocols,” he said, adding that the goal is to scale the multimodal system to monitor 100% of playtime.

For parents, this proactive approach to safety is a significant development. Instead of waiting for reports of inappropriate behavior, the system actively works in the background to identify and shut down problematic servers in real time. Kaufman reassured parents, “We want them to know that we aren’t just reacting to reports; we are proactively building some of the most sophisticated AI moderation systems in the world to help protect their children in real time.”

Roblox also emphasizes the importance of parental involvement in online safety. Parents are encouraged to engage with their children about the games they play and the people they interact with. Simple steps, such as reviewing account settings and discussing screen time rules, can further enhance safety.

Addressing concerns about false positives, Kaufman explained that Roblox is continuously evaluating the accuracy of its multimodal moderation system. “We have a continuous evaluation loop set up to measure false positives from the multimodal moderation system,” he said, indicating that user feedback plays a crucial role in refining the system.

Despite the reliance on advanced AI, Roblox maintains that human oversight remains essential. The platform employs a combination of AI and safety experts to review content before it is made available to users. The new system serves as an additional layer of protection, rather than a replacement for existing safety measures.

As with any powerful technology, questions about privacy and data usage arise. Roblox assures users that data collected for safety purposes is strictly limited to that function. The company is also committed to ensuring fairness and transparency in its safety systems, providing creators with insights into server shutdowns through a new dashboard feature.

Looking ahead, Roblox aims to enhance its moderation capabilities further, including the detection of recreations of real-world events that may violate community standards. Kaufman noted the importance of context in moderation, stating, “Standard filters might see a specific building or a line of text in isolation and not recognize a violation.” The goal is to understand the relationships between environments, avatars, and accompanying chat to improve safety.

This shift in approach represents a significant evolution in how online platforms manage safety. Rather than merely reacting to incidents after they occur, Roblox is striving to prevent harmful behavior before it reaches users. As AI continues to play a larger role in moderating online interactions, the balance between safety, fairness, and user freedom will become increasingly complex.

As the conversation around AI moderation evolves, it raises important questions about the level of control we are comfortable relinquishing to technology. For now, Roblox’s commitment to enhancing online safety through innovative AI solutions marks a promising step forward in creating a safer digital environment for its users.

According to CyberGuy, the implementation of this system is just the beginning, with future developments aimed at further refining the balance between safety and user experience.

Shatabdi Sharma Appointed Chief Information Officer at Capacity

Shatabdi Sharma has been appointed Chief Information Officer at Capacity LLC, where she will lead the company’s global technology strategy and oversee engineering teams in the U.S. and India.

Shatabdi Sharma, an Indian American technology executive, has joined Capacity LLC as the Chief Information Officer (CIO). In her new role, she will spearhead the company’s global technology strategy and manage engineering teams based in both the United States and India.

Sharma’s appointment comes at a pivotal time when logistics providers are increasingly investing in technology, data, and automation to navigate the complexities of retail and e-commerce distribution. Capacity, a leading fulfillment and logistics provider for high-growth consumer brands, views her leadership as a significant step in enhancing its operational capabilities.

According to a news release from the North Brunswick, New Jersey-based company, Sharma will concentrate on fortifying Capacity’s technology infrastructure, enhancing data and analytics capabilities, and ensuring the scalability of its systems.

With over two decades of experience in enterprise technology transformation across retail, consumer goods, and global supply chains, Sharma brings a wealth of knowledge to her new position. Most recently, she served as Brand Technology Leader for Calvin Klein at PVH Corp, the global apparel company behind Calvin Klein and Tommy Hilfiger. In that role, she was instrumental in modernizing the brand’s end-to-end value chain, from product design, development, and planning through delivery across a distributed global supply chain.

Sharma’s tenure at PVH also included roles as Vice President of Global Application Services and Director of Global E-commerce, where she led enterprise platforms that supported e-commerce, supply chain operations, and global business systems. Her previous experience includes technology leadership positions at Hitachi Consulting, Canon, Wegmans, and Home Depot, where she played a key role in modernizing ERP, warehouse management, order management, and integration systems across complex international operations.

In her new role at Capacity, Sharma aims to leverage the company’s strong foundation of operational expertise and institutional knowledge in fulfillment. “My focus is on building the technology strategy that amplifies that strength by integrating data, modern cloud infrastructure, and intelligent systems that allow us to scale while continuing to deliver transparency and efficiency for our partners,” she stated.

As CIO, Sharma will prioritize initiatives that unify data across systems, enhance analytics capabilities, and expand the use of emerging technologies, including AI-driven automation. Her strategic roadmap also emphasizes ongoing investments in security, governance, and workforce upskilling to ensure that the company’s technology teams are well-prepared for the next phase of growth.

Jeff Kaiden, Chief Executive Officer at Capacity, expressed confidence in Sharma’s capabilities, stating, “Shatabdi brings a rare combination of enterprise technology leadership and hands-on supply chain experience. Her perspective helps ensure our technology strategy continues to support the operational realities of fulfillment while positioning Capacity for the next generation of data-driven logistics.”

Sharma has also highlighted the importance of responsible technology adoption in Capacity’s approach. “AI and automation present tremendous opportunities, but they must be implemented thoughtfully,” she remarked. “At Capacity, we are focused on using technology to empower our teams and deliver better insights for our clients while maintaining strong governance and security practices.”

Beyond her technical expertise, Sharma is a passionate advocate for mentorship and diversity in the technology sector. She is actively involved with Extraordinary Women in Tech (EWiT) and has received several accolades, including the 2025 Top 20 Women We Admire Award and the ISG Women in Digital Silver Luminary Award.

Sharma holds a Master of Science in Computer Science, with a focus on Artificial Intelligence, from Utah State University, as well as a Bachelor of Engineering from Barkatullah University in Bhopal, India.

This appointment marks a significant milestone for Capacity as it continues to enhance its technological capabilities in the logistics industry, according to The American Bazaar.

Reddit VP Durgesh Kaushik Resigns to Launch Modveon, Secures $10M Funding

Durgesh Kaushik, former Vice President of Product at Reddit, has resigned to co-found Modveon, a startup focused on digital infrastructure, securing $10 million in initial funding.

Durgesh Kaushik, who served as Vice President of Product at Reddit for three and a half years, has announced his resignation to co-found a new venture named Modveon. This startup aims to address critical challenges in digital infrastructure for the future.

In a personal update shared on LinkedIn, Kaushik reflected on his time at Reddit, describing it as a period filled with significant learning and impactful experiences. He expressed gratitude to key figures at the company, including Pali Bhat, Steve Huffman, and Jen Wong, for their support and partnership. “Leading Product and International Growth at Reddit has been a masterclass in scale,” he stated, adding that he takes pride in helping make Reddit relevant to millions around the globe.

Kaushik’s departure marks a transition toward entrepreneurship, as he focuses on what he perceives as one of the most pressing challenges of the coming decade. He noted, “The internet is world-class at distribution, but the systems underneath it are still version 1.0. Identity is fragmented. Communication is noisy. Coordination is harder than it should be. Money movement is still far too broken in too many places.”

Modveon is positioned as a “verified operating system for modern nation-states and citizens,” aiming to fill gaps in identity, coordination, and financial systems. The startup has successfully raised $10 million in funding from investors, including Coinbase Ventures and Firebolt Ventures.

Kaushik explained the timing of the venture by highlighting the convergence of emerging technologies. “AI is becoming a new interface layer for how people navigate the digital world, and stablecoins are creating new rails for how value moves,” he wrote. He emphasized that both technologies become significantly more effective when built on trusted and verified systems, rather than fragmented ones.

He is co-founding Modveon alongside Nana Murugesan, who serves as CEO. The two share a long professional history, having previously worked together at Snapchat and Coinbase. “From our days scaling Snapchat to our time at Coinbase, we’ve built a decade of trust. There is no one I’d rather build with from the ground up,” Kaushik remarked.

Murugesan echoed Kaushik’s sentiments in a public response to the announcement. “Grateful to be building this with you Durgesh! We have done a lot together over the last decade, now we build what the next decade will run on. Excited for what’s ahead at Modveon,” he stated.

In the meantime, Steve Huffman, CEO of Reddit, has indicated that the company is looking to ramp up hiring of recent college graduates. This comes as parts of the tech sector pull back on entry-level recruitment amid the growing use of AI tools. Speaking on the Sourcery with Molly O’Shea podcast, Huffman noted, “The kids coming out of college right now learned how to program with AI. They’re really good at it, and so I think we will go heavy on new grads, because they’re so much more AI native.”

Kaushik’s move to launch Modveon represents a significant shift in his career, as he seeks to innovate within the digital landscape. His vision for the startup reflects a commitment to addressing foundational issues that have long plagued the internet.

According to The American Bazaar, the future of Modveon appears promising as it embarks on this ambitious journey.

Air Taxis Expected to Launch in the U.S. This Summer

New federal initiatives may pave the way for air taxis to operate in select U.S. cities as early as summer 2026, marking a significant step toward integrating electric vertical takeoff and landing (eVTOL) aircraft into everyday airspace.

For years, the concept of air taxis has lingered in the realm of futuristic technology, often described as “almost here.” With sleek designs and promises of quiet flights, lower costs, and the ability to bypass traffic, the anticipation has been palpable. However, the reality of air taxis may soon shift from concept to reality, thanks to a new federal initiative that could see electric air taxis taking to the skies as early as this summer.

This initiative represents the first program of its kind aimed at integrating air taxis into everyday U.S. airspace. While operations will not be widespread or fully scaled initially, the program is set to establish a foothold for air taxi services in various locations across the country.

Air taxis, also known as eVTOLs (electric vertical takeoff and landing vehicles), are small electric aircraft designed to take off and land vertically. They promise to transport passengers over short distances within urban areas, potentially allowing individuals to skip traffic and travel from one part of a city to another in mere minutes.

The appeal of air taxis is clear, but the journey to their introduction has been fraught with challenges. The primary obstacle has not been technological; rather, it has been regulatory. The Federal Aviation Administration (FAA) mandates that commercial aircraft adhere to stringent safety standards, with failure rates expected to align more closely with those of commercial airlines than with automobiles.

This regulatory landscape poses a challenge for eVTOLs, which are fundamentally different from traditional aircraft. Their unique design allows for vertical takeoff and landing, followed by a transition into forward flight, adding layers of complexity and risk. Companies such as Joby Aviation and Archer Aviation have invested years in testing their aircraft, logging thousands of flights, yet full regulatory approval has remained elusive.

In response to these challenges, the government has introduced the eVTOL Integration Pilot Program (eIPP), aimed at expediting the approval process without compromising safety standards. This program allows companies to initiate limited operations in designated areas rather than waiting for comprehensive nationwide approval. This shift in regulatory approach enables companies to demonstrate safety in real-world conditions and gradually expand their operations.

Eight pilot programs have already been approved across 26 states, creating one of the largest real-world testing environments for next-generation aircraft. These eVTOLs will not only transport passengers but will also facilitate cargo delivery, emergency medical response, and regional transportation. Data collected from these pilot programs will assist the FAA in developing new regulations to safely broaden the use of air taxis across the nation.

“This is the clearest sign yet from the White House, the FAA, and the DOT that bringing air taxis to market in the United States is a real priority,” said Adam Goldstein, founder and CEO of Archer. “We appreciate Secretary Duffy and Administrator Bedford’s leadership and are excited to bring Midnight to the skies of some of America’s largest cities.”

The push for air taxis is not merely about enhancing urban mobility; it is also a response to international competition. Countries like China have already made significant strides in drone technology and air mobility, with companies there conducting commercial passenger flights since 2023. The U.S. aims to reclaim its leadership position in this domain, accelerating innovation across both civilian and military sectors.

Many of the eVTOLs being developed are designed with autonomy in mind. Initially, pilots will be on board during flights, but the long-term vision is to eliminate the need for human pilots. This shift is driven by the desire to reduce weight, lower costs, and enhance scalability. Companies are actively testing automated systems capable of making complex flight decisions in real time, suggesting that the air taxis of the near future may differ significantly from their initial iterations.

While air taxis are unlikely to replace personal vehicles overnight, they could fundamentally alter urban transportation. For residents in major metropolitan areas, air taxis may soon offer a new option that significantly reduces travel time. Additionally, medical flights and disaster response could become faster and more efficient, potentially transforming emergency services.

Initially, rides may come at a premium price, but as the technology matures and demand increases, costs could align more closely with traditional rideshare services. The move toward autonomous air taxis could signal a broader transformation across various modes of transportation.

The timeline for air taxi operations is becoming clearer, with limited flights expected to commence as early as summer 2026. However, this does not imply that consumers will be able to book flights through an app immediately. Initial operations will likely focus on specific areas and applications.

Once the door to air taxi operations opens, expansion is expected to occur rapidly, similar to the trajectories seen with rideshare services and electric vehicles. “The first time I saw a Waymo on the road in San Francisco, it was a big deal. Now, self-driving cars are just part of everyday life there. I believe the eIPP will do the same thing for air taxis,” Goldstein added. “Every safe flight builds towards public acceptance, and we need to build that acceptance in parallel with our certification efforts.”

Air taxis have long been categorized as a technology on the verge of realization. Now, they are poised to enter the realm of practicality. Despite the challenges that remain—such as safety, cost, and infrastructure—the new regulatory approach is set to accelerate progress. As the public begins to experience this mode of travel firsthand, perceptions and expectations are likely to evolve rapidly.

If given the opportunity to bypass traffic and fly across your city in minutes, would you take the leap, or would you prefer to wait and see how others fare? Share your thoughts with us at Cyberguy.com.

According to Fox News, limited air taxi flights could begin in select U.S. cities as early as summer 2026.

Srikant Appointed to Lead National Center for Supercomputing Applications

R. Srikant, an IIT Madras alumnus, has been appointed the new director of the National Center for Supercomputing Applications, a leading hub for high-performance computing and data science.

Indian-born engineering scholar R. Srikant has taken the helm as the new director of the National Center for Supercomputing Applications (NCSA), one of the world’s foremost centers for high-performance computing and data science. His appointment marks a significant milestone for the center as it continues to play a crucial role in advancing research in various fields.

Srikant, who holds the Grainger Distinguished Chair in Engineering and is a professor at the University of Illinois Urbana-Champaign, officially assumed his role on January 1, 2026. He succeeds Bill Gropp, the previous director, and also serves as co-director of the C3.ai Digital Transformation Institute, which is a collaborative effort with the University of California, Berkeley.

His journey to leading NCSA began in India, where he established his academic foundation at the Indian Institute of Technology, Madras. After earning his undergraduate degree in 1985, Srikant moved to the United States to pursue advanced studies at the University of Illinois, where he joined the faculty in 1995.

Srikant’s deep connections to both his alma mater and his early education in India have significantly influenced his career, which is characterized by the integration of complex theoretical mathematics with practical technological applications.

His new role at NCSA comes at a critical juncture, as artificial intelligence and extensive data processing are becoming increasingly vital to global research initiatives. NCSA is tasked with providing the infrastructure necessary to support breakthroughs in diverse areas, including genomics and climate modeling.

“I’m very excited to begin this new journey with NCSA,” Srikant expressed. “My focus is on supporting our excellent researchers and staff, strengthening collaboration across the center, and ensuring that NCSA continues to thrive in its research, service, and impact missions.”

NCSA is not unfamiliar territory for Srikant. He previously served as the acting director of operations at NCSA for several months in 2023 and has engaged in numerous research collaborations between his home department and the high-performance computing experts at NCSA.

His research interests encompass a wide range of topics, including artificial intelligence, machine learning, communication networks, quantum computing, and applied probability. Srikant has received significant recognition for his work on the mathematical analysis and design of algorithms for the internet, wireless networks, and data centers. His accolades include the IEEE Koji Kobayashi Field Award for Computers and Communications and the ACM SIGMETRICS Achievement Award. Additionally, he is a fellow of the Institute of Electrical and Electronics Engineers (IEEE).

For Srikant, this new role represents a full-circle moment in a career that began with a degree in Chennai and has now culminated in a leadership position at a premier American computational research institution. His vision for NCSA is poised to drive innovation and collaboration in the rapidly evolving landscape of supercomputing and data science.

According to The American Bazaar, Srikant’s leadership is expected to enhance NCSA’s impact on research and technology development.

Indian-American Researchers Create Tool to Identify AI-Generated Radiology Reports

Three Indian American researchers at the University at Buffalo are developing a tool to detect AI-generated radiology reports, addressing concerns over falsified medical documentation and fraudulent insurance claims.

In an effort to combat the rising threat of falsified medical documentation and bogus insurance claims, a team of researchers from the University at Buffalo (UB) is developing a tool to identify AI-generated radiology reports. This initiative comes in response to the potential dangers posed by AI-generated medical reports, which can impersonate doctors or fabricate injuries in X-ray images, leading to significant issues within the medical and insurance sectors.

The UB team, led by Nalini Ratha, PhD, a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering, believes they have created the first AI system specifically designed to differentiate between radiology reports authored by humans and those generated by artificial intelligence. “With generative AI becoming more capable of producing remarkably convincing radiology reports, there’s a greater risk of fabricated reports being used to falsify medical histories and support fraudulent claims,” Ratha explained.

Ratha emphasized the unique challenges posed by radiology reports, which possess a highly specialized structure, vocabulary, and stylistic norms that make general-purpose detection systems unreliable. “Therefore, our goal was to build a detection framework designed specifically for radiology that can distinguish clinician-written medical documentation from synthetic text before it reaches clinical or insurance workflows,” she added.

The research team, which includes PhD students Arjun Ramesh Kaushik and Tanvi Ranga, presented their findings in a study titled “Detecting Synthetic Radiology Reports Using Style Disentanglement” at the 2025 GenAI4Health workshop during the Conference on Neural Information Processing Systems held in San Diego in December.

As part of their research, the team compiled a dataset comprising 14,000 pairs of radiologist-authored and AI-generated chest X-ray reports. They employed two distinct methods to create the synthetic reports: paraphrasing actual radiologist reports using advanced large language models (LLMs) and generating complete reports directly from chest radiographs using medical vision-language models (VLMs).

This dataset is notable for being the first to integrate both text-based and image-based synthetic radiology reports, marking a significant advancement for trustworthy AI research in healthcare. The samples focused specifically on the findings section of the reports, which captures the radiologist’s detailed analysis and includes extensive domain-specific terminology and descriptive language.

“The findings section is both central to authorship attribution and the one most susceptible to exploitation,” Ratha noted.

The subsequent phase of their study involved developing an authorship-detection framework tailored to operate on this dataset. Although LLMs can replicate clinical terminology, they often struggle to mimic the stylistic characteristics inherent in human-authored radiology reports.

Recognizing this gap, the UB researchers devised a detection model based on BERT–Mamba technology, designed to separate each report’s stylistic features from its underlying clinical content. Their model demonstrated high accuracy and consistency, achieving Matthews correlation coefficient (MCC) scores ranging from 92% to 100% across both text-to-text and image-to-text categories. Furthermore, the framework proved effective in cross-LLM tests, accurately identifying AI-generated reports from models it had not previously encountered.
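The Matthews correlation coefficient the team reports is a standard summary of binary-classifier quality computed from the confusion matrix; a perfect detector scores 1.0 (100%). The sketch below shows how the metric is calculated; the labels and predictions are illustrative toy data, not the study's results.

```python
import math

def mcc(labels, preds):
    """Matthews correlation coefficient for binary labels/predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# 1 = AI-generated report, 0 = clinician-written (illustrative data only)
labels = [1, 1, 1, 0, 0, 0, 1, 0]
preds  = [1, 1, 0, 0, 0, 1, 1, 0]
score = mcc(labels, preds)  # 0.5 on this toy example
```

Unlike plain accuracy, MCC accounts for all four confusion-matrix cells, which makes it a robust choice when the two classes are imbalanced.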

“What we found is that LLMs tend to write in polished, expansive language, while clinicians prefer concise, direct terms. For instance, radiologists use straightforward terms like ‘heart’ or ‘lung,’ whereas LLMs often opt for more elaborate phrases like ‘pulmonary vasculature.’ This distinction became a clear stylistic signal that our model learned to recognize,” Ranga explained.

Despite the promising results, the research team plans to continue refining both the dataset and the benchmark detection model in preparation for public release. They also envision that as AI systems become increasingly sophisticated and tailored to specific fields like radiology, these tools could significantly alleviate the workload for radiologists.

While the focus of their research is on radiology, Ratha believes the implications extend beyond healthcare. The style-based detection approach developed by the team could also be beneficial in safeguarding industries that are increasingly vulnerable to AI-generated forgeries, fabricated records, and synthetic narratives, including insurance, finance, journalism, education, and the legal profession.

According to The American Bazaar, this innovative research highlights the critical need for reliable detection methods as AI technology continues to evolve and integrate into various sectors.

Three Steps to Secure Your Email and Protect All Accounts

Account takeover fraud can devastate your finances, but implementing three key security measures can help protect your email and associated accounts from criminals.

Criminals no longer need your passwords to access your financial accounts; they simply need your email. This alarming trend has become a significant concern as account takeover fraud continues to rise.

Recently, a friend of mine, Lisa, experienced this firsthand when her PayPal account was drained, followed by her Amazon account, and an attempted breach of her bank account—all within 40 minutes. The criminals did not require her passwords; they only needed access to her email.

Consider the sensitive information that resides in your email inbox. It contains bank statements, medical results, retirement account details, mortgage information, and access to every streaming service and online store you have ever used. Perhaps most concerning is that every password reset link is sent directly to your inbox.

With access to your email, a criminal can easily reset the passwords for your other accounts. They simply visit your bank’s website, click “forgot password,” and enter your email address. The bank sends a reset link to your inbox, which the criminal can access if they are already inside your email. Within minutes, they can breach your Amazon, PayPal, brokerage, and health insurance accounts.

This type of fraud, known as account takeover fraud, cost Americans an estimated $2.7 billion last year. Disturbingly, 81% of victims reported believing they were “pretty careful” about their security before falling victim to this crime.

To safeguard your email, start by changing your password if it is under 16 characters or if you have reused it across multiple accounts. Consider using a password manager like NordPass, which generates complex passwords that are difficult to guess. You only need to remember one master password to access all your accounts securely.

Implementing two-factor authentication (2FA) is another crucial step. Even if someone steals your password, they cannot access your account without a second verification code. However, many people are unaware that SMS text codes can be intercepted through a method known as a SIM swap attack. In this scenario, a criminal convinces a customer service representative at your cell carrier to transfer your phone number to their device, allowing them to receive your “secure” text codes.

To enhance your security, switch to an authenticator app like Google Authenticator, which generates codes directly on your physical device rather than through your carrier. This change can be made in just a few minutes through your email account’s security settings.

Additionally, be mindful of the permissions you grant to third-party applications. Every time you use the “Sign in with Google” option to access a website or app, you may inadvertently give that app access to your email. Some applications can read your messages or even send emails on your behalf. Conduct an audit of your connected apps by visiting myaccount.google.com, navigating to the Security section, and reviewing third-party apps with account access. Revoke access to any apps you do not recognize or actively use.

While your bank may have a fraud department and your credit card may offer zero-liability protection, your email security is solely your responsibility. Taking these steps can significantly reduce your risk of falling victim to account takeover fraud.

In just twenty minutes, you can implement these three essential security measures. Lisa wishes she had taken these precautions during a quiet Sunday afternoon rather than in a state of panic on a Tuesday night.

Your email inbox can either be a secure fortress or an open door. Unlike your front door, it does not require a deadbolt—just strong security practices.

For more tips on staying safe online, visit Komando.com.

Robot Engages in Real-Time Tennis Matches with Human Players

A humanoid robot has demonstrated the ability to play tennis with a human in real time, utilizing AI technology to track and respond to shots without pre-programmed scripts or remote control.

A humanoid robot has made headlines by rallying tennis shots with a human player in real time. This innovative robot operates without a script or remote control, allowing it to react instantly on the tennis court.

Standing at approximately 4 feet tall, the robot features a compact, human-like frame. Developed by Galbot Robotics, it was recently shown in a video rallying through a series of shots with a human opponent. The underlying technology, known as LATENT, operates on the Unitree G1 platform.

Unlike many athletic robots that follow pre-programmed routines or rely on remote control, this robot reacts dynamically to its human counterpart. It tracks fast-moving tennis balls, adjusts its position on the court, and returns shots with impressive accuracy. The robot is capable of adapting to changing trajectories and unpredictable shots during rallies, demonstrating significant advancements in robotic performance.

Researchers have noted that the robot can sustain long rallies with millisecond-level reaction times and full-body coordination, marking a major leap forward in robotic capabilities.

Training a robot to play tennis presents a complex challenge. Capturing comprehensive data on human gameplay is difficult, prompting researchers to adopt a different approach. Instead of recording entire matches, they concentrated on smaller segments of movement.

Over the course of their research, the team gathered approximately five hours of motion data from five players. These training sessions took place on a compact 10-by-16-foot court, roughly one-seventeenth the area of a standard tennis court.
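The size comparison checks out against regulation dimensions (an ITF doubles court measures 78 by 36 feet):

```python
# Area comparison between the compact training court and a regulation court
training_court_sqft = 10 * 16        # 160 sq ft training area
standard_court_sqft = 78 * 36        # 2,808 sq ft regulation doubles court
ratio = standard_court_sqft / training_court_sqft  # about 17.6
```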

The robot’s ability to play tennis during live rallies is rooted in its learning process. Initially, the system learns individual movements, which are then combined into coordinated sequences. This method allows the robot to improve its performance significantly.

To further enhance its capabilities, the research team trained the model in simulated environments, varying physical conditions such as mass, friction, and aerodynamics. This simulation training enables the robot to adapt to real-world unpredictability, allowing it to respond dynamically rather than adhering to a fixed routine.
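This style of simulation training is commonly implemented as domain randomization: each training episode draws fresh physical parameters so the learned policy generalizes to real-world variation rather than overfitting to one fixed simulator configuration. A minimal sketch, with illustrative (assumed) parameter ranges rather than the team's actual values:

```python
import random

def sample_physics(rng: random.Random) -> dict:
    # Randomize the parameters the article names (mass, friction,
    # aerodynamics) within illustrative ranges; each episode gets a
    # fresh draw so no single configuration dominates training.
    return {
        "ball_mass_kg": rng.uniform(0.050, 0.065),
        "court_friction": rng.uniform(0.5, 0.9),
        "drag_coefficient": rng.uniform(0.45, 0.60),
    }

rng = random.Random(0)                       # seeded for reproducibility
episodes = [sample_physics(rng) for _ in range(3)]
```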

In testing, the system achieved an impressive success rate of up to 96% on forehand shots in simulation. In real-world trials, the robot has demonstrated the ability to sustain rallies with a human player and consistently return the ball over the net.

Observing the demonstration, the robot appears competitive, occasionally placing shots strategically away from the human player. This behavior suggests that the robot is capable of more than mere reaction; it indicates early forms of decision-making abilities.

Despite these advancements, there are still limitations. At times, the robot may appear unstable, and its movements are not yet as fluid as those of a trained athlete. Additionally, high or unpredictable shots can still pose challenges. Nevertheless, the progress made thus far is evident.

This breakthrough in robotics extends beyond the realm of tennis. It illustrates how robots can learn complex human skills without the necessity of perfect data. The methodologies employed in this research could potentially be applied to various tasks that lack complete motion data.

The future of robotic capabilities in sports is becoming increasingly clear. Today, the robot is able to rally; tomorrow, it may compete against human players. In the not-so-distant future, robots could train alongside or challenge professional athletes, and exhibition matches between humans and machines may become a regular feature in the sport.

This demonstration highlights the rapid advancements in robotic technology. Robots are no longer limited to following scripts; they can now react, adjust, and compete in real-time scenarios. What once seemed like a distant possibility is now becoming a reality.

The question remains: If a robot could outperform you on the tennis court, would you still be eager to compete, or would you prefer to train alongside it? Share your thoughts with us at Cyberguy.com.

According to CyberGuy, the implications of this technology could reshape not only sports but also various fields that require complex human-like skills.

AI Policy Changes in the U.S. May Impact Indian-American Tech Relations

The Trump administration’s new artificial intelligence framework aims to reshape U.S.-India tech relations by fostering innovation and addressing workforce development in the global AI landscape.

WASHINGTON, DC—The Trump administration has unveiled a national framework on artificial intelligence (AI), a move that could significantly influence Indian talent, IT firms, and policy discussions as the United States seeks to lead the global AI race.

In a six-point plan designed to enhance innovation, safeguard citizens, and reinforce U.S. leadership, the White House expressed its ambition to “win the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people.” The administration has urged Congress to enact this plan into law.

The framework addresses several critical areas, including child safety, economic growth, intellectual property, free speech, innovation, and workforce development. These components are closely intertwined with India’s role in the U.S. technology ecosystem.

“The Administration recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s wellbeing or their monthly electricity bill,” the White House stated. It emphasized that these concerns “require strong Federal leadership to ensure the public’s trust in how AI is developed and used in their daily lives.”

For Indian-origin professionals, the emphasis on cultivating an “AI-ready workforce” is particularly significant. A substantial number of Indians are employed in U.S. technology sectors. The plan advocates for enhanced training and skills development, asserting that workers should “participate in and reap the rewards of AI-driven growth.”

This policy shift is also crucial for India’s IT services sector, which plays a vital role in supporting global AI systems through engineering and data-related work. The administration aims to eliminate “outdated or unnecessary barriers to innovation” and expedite the adoption of AI across various industries, potentially increasing demand for international tech partnerships.

Moreover, the plan places a strong emphasis on data centers and energy management. The White House remarked, “ratepayers should not foot the bill for data centers,” urging Congress to expedite approval processes. It also encourages companies to generate power on-site, as the expansion of AI infrastructure could impact global supply chains connected to India.

On the matter of intellectual property, the administration seeks a balanced approach. It stated that “the creative works and unique identities of American innovators, creators, and publishers must be respected in the age of AI,” while also asserting that AI systems should have the ability to learn from available data.

The framework further underscores the importance of free speech, with the White House asserting that “AI cannot become a vehicle for government to dictate right and wrong-think.” It calls for safeguards to protect lawful expression from censorship.

Another critical aspect of the plan is the establishment of a single national policy. The administration cautioned that “a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.” A uniform regulatory system could benefit Indian firms operating across various U.S. states.

The White House has committed to collaborating with Congress to pass this legislation, emphasizing the necessity for the federal government to establish clear national rules for AI.

As governments worldwide race to regulate AI, the United States and China are at the forefront of this competition. The implications of AI are increasingly linked to economic power and national security.

India is also making strides in expanding its AI ecosystem, investing in technology while maintaining flexible regulations. Decisions made in Washington are likely to set global standards, compelling Indian firms and professionals to adapt to these evolving changes.

According to IANS, the developments in U.S. AI policy will have far-reaching effects on international tech collaborations and workforce dynamics.

SEC Concludes Four-Year Investigation into EV Startup Faraday Future

The SEC has officially closed its four-year investigation into electric vehicle startup Faraday Future, marking a significant moment in the agency’s enforcement history.

The United States Securities and Exchange Commission (SEC) has concluded its investigation into electric vehicle startup Faraday Future, a decision that comes after a lengthy four-year probe. The investigation focused on allegations that the company made “false and misleading statements” following its public debut through a merger with a special purpose acquisition company (SPAC) in 2021.

During the investigation, the SEC scrutinized claims Faraday Future made about sales of its first electric vehicles, which at least three whistleblowers, all former employees of the company, alleged were fabricated. The SEC’s inquiry included multiple subpoenas and depositions of former employees and executives throughout 2024 and 2025.

In July 2025, Faraday Future disclosed that the SEC had issued “Wells Notices” to the company and several of its executives, including founder Jia Yueting. A Wells Notice is a formal communication from the SEC indicating that the agency’s staff has found sufficient grounds to recommend enforcement action.

In light of the SEC’s decision to close the investigation, Yueting expressed relief, stating, “We can now put all our energy into strategy execution. Over the past five years, we had to spend a great deal of time, effort, and money on cooperating with the investigation.” Faraday Future also confirmed that the SEC would not pursue any further action against its executives.

Despite the closure of the investigation, it remains unclear whether Faraday Future responded to the Wells Notices issued last year. As of February, the company indicated in regulatory filings that it had not yet done so, although it planned to engage with the SEC to argue that enforcement action was unwarranted.

Additionally, the U.S. Department of Justice (DOJ) sought information from Faraday Future after the SEC opened its investigation in 2022. The company has characterized this as an “investigation” in its regulatory filings, though the DOJ has not confirmed any ongoing inquiry.

Historically, the SEC tends to pursue enforcement actions after issuing Wells Notices. A study conducted by the Wharton School in 2020 indicated that approximately 85% of targets receiving a Wells Notice ultimately face legal action from the SEC.

In recent years, the SEC has investigated numerous electric vehicle startups that went public via SPAC mergers. While many of these investigations have resulted in settlements, the agency has also dismissed probes into companies like Lucid Motors in 2023 and Fisker in 2025.

As Faraday Future moves forward without the burden of the SEC investigation, the company will likely focus on its strategic goals and the development of its electric vehicle offerings.

According to The American Bazaar, the closure of this investigation marks a pivotal moment for Faraday Future as it seeks to establish itself in the competitive electric vehicle market.

Astronauts Arrive at ISS for Eight-Month Mission After Medical Emergency

Four astronauts have arrived at the International Space Station for an eight-month mission, following a recent medical emergency that led to an early evacuation of some crew members.

Four new astronauts have successfully arrived at the International Space Station (ISS), restoring the facility to full capacity after a recent medical emergency forced an early evacuation of several crew members. The international team, which includes NASA Commander Jessica Meir, launched from Cape Canaveral aboard a SpaceX rocket on Friday, embarking on a journey that lasted approximately 34 hours.

“That was quite the ride,” Meir remarked shortly after the launch, as reported by BBC News. “We have left the Earth, but the Earth has not left us.” The launch had previously been delayed twice due to weather concerns.

Joining Meir for the upcoming eight to nine months on the ISS are NASA astronaut Jack Hathaway, France’s Sophie Adenot, and Russian cosmonaut Andrei Fedyaev. Both Meir and Fedyaev are seasoned space travelers, having previously visited the ISS. Notably, Meir participated in the first all-female spacewalk in 2019. Adenot, a military helicopter pilot, is only the second French woman to travel to space, while Hathaway holds the rank of captain in the U.S. Navy.

The spacecraft is expected to autonomously dock with the space station’s Harmony module at 3:15 p.m. CT on Saturday, traveling at a speed of 17,000 mph in Earth orbit. “What an absolutely wonderful start to the day,” said NASA Administrator Jared Isaacman following the launch. “This mission has shown in many ways what it means to be mission-focused at NASA.”

Isaacman also highlighted the recent adjustments made to the crew schedule, stating, “In the last couple of weeks, we brought Crew-11 home early, we pulled forward Crew-12 to the launch date today, all while simultaneously making preparations for the Artemis 2 mission, which its next window will open up in early March.”

This flight marks the 12th crew rotation with SpaceX as part of NASA’s Commercial Crew Program. Crew-12 will engage in scientific investigations and technology demonstrations aimed at preparing humans for future exploration missions to the Moon and Mars, while also benefiting life on Earth.

NASA confirmed that the capsule’s hatch opened at 4:14 p.m. CT after docking with the ISS. “We are so excited to be here and get to work,” Meir stated upon the crew’s arrival. Adenot shared her awe, saying, “The first time we looked at the Earth was mindblowing. … We saw no lines, no borders.”

Prior to the arrival of the new crew, only one American and two Russians remained aboard the ISS, ensuring the station continued to operate smoothly. The medical evacuation that took place in January was a significant event, marking the first such incident in 65 years. NASA reported that a crew member experienced a serious health issue, but the agency has not disclosed the nature of the condition or the name of the astronaut involved, citing medical privacy.

The astronaut who faced the medical emergency, along with three other crew members who had launched together, returned to Earth more than a month earlier than planned after the decision was made to bring them home.

According to The Associated Press, the successful arrival of the new crew marks a significant step forward for the ISS and its ongoing scientific missions.

Fake Google Security Page Can Compromise Your Browser’s Privacy

A new phishing scam impersonating Google is tricking users into installing malware that can steal sensitive information and spy on their devices.

Security researchers have uncovered a phishing scam that masquerades as a Google security check, tricking individuals into installing malware designed to steal two-factor authentication (2FA) codes, track locations, and monitor clipboard data.

The fraudulent page presents itself as a legitimate Google security alert, claiming that users need to enhance their account protection. It guides visitors through a seemingly straightforward setup process aimed at bolstering their security and safeguarding their devices. However, those who follow the instructions may unwittingly install what appears to be a harmless security tool, which, in reality, is a malicious web application capable of spying on their devices.

According to security experts, this malicious app can capture login verification codes, monitor clipboard activity, track GPS location, and reroute internet traffic through the user’s browser. The most alarming aspect of this scam is that it does not exploit any software vulnerabilities; instead, it relies on social engineering to trick users into granting the necessary permissions. Once these permissions are granted, the user’s own browser can be manipulated to serve the attackers’ purposes without their knowledge.

Researchers at Malwarebytes, a cybersecurity firm, recently identified a phishing website that imitates Google’s account protection system. This site, operating under the domain google-prism[.]com, presents a convincing security page that prompts users to complete a brief verification process. Visitors are instructed to undertake a four-step setup to enhance their account security, which purportedly protects their devices from various threats.

During this process, users are asked to approve multiple permissions and install what is claimed to be a security tool. The application installed is actually a Progressive Web App (PWA), which runs through the browser but functions like a native application on a computer. It can open in its own window, send notifications, and perform tasks in the background.

Once installed, the malicious web app can gather contacts, read clipboard information, track GPS location data, and attempt to capture one-time login codes sent to users’ phones. These codes are commonly used for accounts that implement two-factor authentication.

Additionally, the fake security page may offer an Android companion app described as a “critical security update.” Researchers have noted that this app requests an alarming 33 permissions, including access to text messages, call logs, contacts, microphone recordings, and accessibility features. Such extensive permissions enable attackers to read messages, capture keystrokes, monitor notifications, and maintain control over various aspects of the device. Even if the Android app is not installed, the web app alone can still collect sensitive information and operate quietly through the user’s browser.

The effectiveness of this scam lies in its ability to mimic trusted sources. Many individuals expect security alerts from the services they utilize, particularly regarding the protection of their email or cloud accounts. Attackers exploit this trust by presenting the fake page as a beneficial security feature. When users approve the permissions and install the web app, they inadvertently grant attackers access to specific areas of their devices. One of the primary targets for these attackers is the capture of one-time passwords, which are essential for logging into accounts that require two-factor authentication.

If attackers successfully capture these codes while also knowing the user’s password, they may gain access to various accounts, including email, financial services, or cryptocurrency wallets. The malware’s capability to monitor clipboard activity is particularly concerning, as individuals often copy cryptocurrency wallet addresses before conducting transactions, making this information valuable to criminals.

Another feature of the malicious app allows attackers to route internet requests through the user’s browser, making it appear as though online activity originates from the user’s home network. The app can also send notifications that mimic security alerts or system warnings. When users click on these notifications, the app reopens, providing another opportunity to capture sensitive information such as login codes or clipboard data.

In response to inquiries about this phishing campaign, a Google spokesperson confirmed that several built-in security systems are in place to thwart threats like this before they can inflict harm. “We can confirm that Safe Browsing in Chrome warns any user who tries to visit this site,” the spokesperson stated. “Chrome also shows a confirmation dialog whenever anyone attempts to download an APK. Android users are automatically protected against known versions of this malware by Google Play Protect, which is enabled by default on Android devices with Google Play Services.”

Google also indicated that its current monitoring shows no apps containing this malware are available on the Google Play Store. Even if malicious apps are installed from outside official stores, Google asserts that Android devices have an additional layer of protection. Google Play Protect can alert users or block apps known to exhibit malicious behavior, including those installed from third-party sources.

However, Google Play Protect is not foolproof; it has not always caught or removed every known piece of malware on Android devices. Experts therefore recommend using robust antivirus software to detect malicious downloads, suspicious browser activity, and phishing attempts before they can cause significant damage. Such software acts as an early warning system, helping to block dangerous apps and websites before they can access your device or data.

To avoid falling victim to a suspicious “security check,” users should adopt a few simple habits to protect their accounts and devices. Google does not request the installation of security tools through pop-ups or unfamiliar websites. If a page claims that an account requires a security check, users should close the tab and navigate directly to Google’s official account page by typing the address manually. This approach prevents attackers from redirecting users to a fraudulent site.

Phishing pages often utilize domains that closely resemble those of legitimate companies. Attackers rely on users clicking quickly without scrutinizing the address bar. If the website address does not belong to an official Google domain, it should not be trusted. Even minor alterations in spelling can indicate a fake site designed to steal information.
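The lookalike-domain trap described above can be made concrete with a small sketch. This is an illustrative check, not a complete defense: the allowlist here is a hypothetical example, and real phishing detection involves far more signals. The key detail is the dot boundary, which is what separates a genuine subdomain like `myaccount.google.com` from a lookalike such as `google-prism.com`.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains (illustration only).
OFFICIAL_DOMAINS = {"google.com"}

def is_official(url: str) -> bool:
    """True only if the URL's hostname is an official domain
    or a subdomain of one (note the leading dot in the suffix check)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://myaccount.google.com/security"))  # True: real subdomain
print(is_official("https://google-prism.com/security"))      # False: dot boundary fails
```

A naive `"google" in host` check would pass the fake domain; matching on `"." + domain` is what makes the boundary explicit.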

If users have installed an app through a website and it opens like a standalone program, they should check their browser’s installed apps or extensions list. Removing any unfamiliar or unrecognized items immediately can prevent further information collection or command execution through the browser.

Researchers warn that the malicious Android app may appear under names such as “Security Check” or “System Service.” If users encounter unfamiliar apps with these names, they should review the permissions requested and remove them if they seem suspicious. Apps requesting extensive permissions, such as SMS access, accessibility features, and microphone control, should always be scrutinized.

Using a password manager can help create and store strong, unique passwords for every online account. If attackers obtain one password, they will not automatically gain access to other accounts. Password managers also help prevent users from entering credentials on fake sites, as they typically refuse to auto-fill on lookalike domains.

Two-factor authentication (2FA) adds an extra layer of security beyond passwords. Although this attack aims to capture SMS verification codes, many services allow the use of authenticator apps instead. These apps generate login codes directly on the user’s device, making it significantly more challenging for attackers to intercept them.
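The reason authenticator-app codes are harder to intercept than SMS codes is that they are computed locally from a shared secret and the current time, per RFC 6238 (TOTP). A minimal sketch using only the standard library, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = for_time // step                       # which 30-second window we're in
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s -> "287082"
print(totp(b"12345678901234567890", 59))
```

Because the secret never leaves the device and each code expires in 30 seconds, an attacker who phishes one code has only a narrow window to use it, unlike an SMS code that can be silently forwarded.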

If users suspect they have interacted with a dubious security page, they should closely monitor their accounts in the following days for login alerts, password reset emails, or unfamiliar transactions. Prompt action in response to suspicious activity can help prevent attackers from gaining full control over accounts.

Scammers often gather personal information from data broker sites to craft convincing phishing messages. Utilizing a data removal service can assist in removing personal information from these databases, thereby reducing the amount of data criminals can exploit to impersonate companies or create targeted scams.

As attackers evolve their tactics, they are increasingly relying on convincing security messages to persuade individuals to install malicious tools themselves, rather than exploiting technical flaws. Given the reliance on familiar brands like Google for security decisions, it is essential to enhance safeguards against impersonation sites and improve the regulations surrounding the capabilities of installed web apps.

For more information on cybersecurity and to stay updated on potential threats, visit CyberGuy.com.

Hospital Cyberattacks Raise Concerns Over Patient Safety and Care

Hospital cyberattacks pose significant risks to patient safety, disrupting care and exposing sensitive medical data, as highlighted by security expert Ricardo Amper.

Recent episodes of medical dramas may dramatize the chaos of a hospital cyberattack, but for many healthcare facilities, these scenarios are all too real. In Mississippi, the University of Mississippi Medical Center experienced a ransomware attack that forced clinics statewide to close, canceled elective procedures, and disrupted access to electronic medical records. While emergency care continued, the incident underscored a growing concern: hospital cyberattacks are not merely a technical issue but a serious public safety threat.

According to Ricardo Amper, founder and CEO of Incode Technologies, a digital identity verification and biometric authentication company, hospitals are uniquely vulnerable to cyber threats. “If systems go down, patient care is immediately affected,” he explained. The urgency to restore operations quickly often makes healthcare facilities prime targets for ransomware groups. Amper notes that hospitals house some of the most sensitive data, including medical records, identity information, and insurance details, making them attractive targets for cybercriminals.

Moreover, the interconnected nature of healthcare systems means that vulnerabilities can arise from third-party vendors and service providers. “In healthcare, you’re only as secure as the entire ecosystem around you,” Amper stated. While many people envision hackers breaching firewalls, the reality is shifting. Increasingly, attackers are employing social engineering tactics to exploit human trust rather than technical weaknesses.

Artificial intelligence (AI) has made it easier for criminals to impersonate trusted individuals. They can clone voices, generate convincing emails, or create deepfake videos that appear to come from legitimate sources, such as doctors or IT administrators. “AI doesn’t replace social engineering; it supercharges it,” Amper remarked. This means that an employee might receive what seems to be a legitimate request to reset a password or approve a login, leading to a potential breach with just one click.

In the fast-paced environment of a hospital, speed is essential. Healthcare professionals are often focused on patient care, which can create openings for attackers who rely on deception. “That urgency can make it easier for attackers to exploit trust or distraction,” Amper noted. Additionally, many hospitals operate with legacy systems that have been layered over time, increasing complexity and risk. Amper challenges the notion that cybersecurity is solely an IT issue, emphasizing that it is fundamentally about operational resilience.

When a hospital’s systems are compromised, the fallout can be extensive. Exposed data may include not only credit card numbers but also medical histories, Social Security numbers, insurance information, and contact details. This combination can lead to identity fraud, insurance fraud, and targeted scams. Unlike credit cards, stolen medical identities cannot simply be replaced, making them particularly valuable in criminal markets. The effects of a breach may not be immediate; they can emerge months or even years later.

As identity theft becomes increasingly prevalent, Amper highlights the importance of robust identity verification measures. “Identity has become the front line of cybersecurity,” he stated. If an attacker can successfully impersonate a trusted user, many traditional defenses can be bypassed. Hospitals must implement stronger identity verification, layered authentication, and systems capable of detecting impersonation or deepfakes to safeguard against these threats.

For patients concerned about the security of their data following a breach, there are steps they can take. One proactive measure is to check if their email address appears in known data breaches by visiting haveibeenpwned.com. If an email is found in a breach, it is crucial to act quickly by changing passwords for affected accounts and ensuring that each account uses a unique password.
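Alongside the email lookup on haveibeenpwned.com, the same service exposes a free Pwned Passwords range API that uses k-anonymity: only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves the device. (Breached-account lookups by email address require an API key, which is why this sketch covers the password check instead.) A minimal sketch:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the remainder."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Query the Pwned Passwords range API; only the 5-char prefix is transmitted.
    Returns how many times the password appears in known breaches (0 = not found)."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():          # each line: SUFFIX:COUNT
        found_suffix, _, count = line.partition(":")
        if found_suffix == suffix:
            return int(count)
    return 0
```

The server returns every suffix matching the prefix, and the client does the final comparison locally, so the service never learns which password was checked.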

Receiving a breach notification letter can be alarming, but Amper advises patients to remain calm and take it seriously. “Read the notice carefully and enroll in any credit or identity monitoring services offered,” he suggests. If something feels off, patients should contact the hospital directly using official contact information rather than relying on links or numbers provided in unexpected messages. He emphasizes the importance of treating medical identity with the same seriousness as financial identity, urging individuals to monitor their records and remain vigilant.

The consequences of hospital cyberattacks extend beyond stolen records; they affect entire communities. Appointments are canceled, surgeries are delayed, and families are left in uncertainty. This situation raises an uncomfortable question: if your local hospital were to go offline tomorrow, would you trust that your medical identity and care are adequately protected?

As technology continues to transform healthcare, the challenge lies in building resilience into every layer of care. The next cyberattack will not feel like a scripted drama; it will have real-world implications for patient safety and trust in the healthcare system. Taking proactive measures today can help prevent long-term identity damage in the future.


Wall-Climbing Robots Assist US Navy Warships, Fox News Reports

Wall-climbing robots are now crawling on U.S. Navy warships, marking a significant advancement in naval technology amid rising tensions with China.

The Fox News AI Newsletter provides insights into the latest advancements in artificial intelligence technology, highlighting both the challenges and opportunities that AI presents in various sectors.

In a recent report, Fox News Digital showcased a groundbreaking development in naval technology: wall-climbing robot swarms that are now being deployed on U.S. Navy warships. This innovation comes at a pivotal moment as the U.S. faces an expanding naval fleet from China, which is rapidly increasing in size and capability.

In addition to military advancements, the economic implications of artificial intelligence are also a topic of discussion. An opinion piece in the newsletter argues that the costs associated with AI development, including its extensive energy requirements, will ultimately be passed down to consumers. This raises concerns about who will bear the financial burden of the growing AI industry.

On the corporate front, Dell Technologies has announced a 10% reduction in its workforce for the third consecutive year. This trend reflects ongoing shifts in economic conditions and corporate restructuring within the technology sector, as reported by Fox Business.

In the realm of aviation, Merlin CEO Matt George discussed advancements in AI pilot technology, which aims to enable military and commercial aircraft to operate fully autonomously. This development was highlighted during an appearance on Fox Business’ ‘The Claman Countdown.’

The impact of AI extends beyond military and corporate applications. Homebuyers and sellers are increasingly turning to AI chatbots for guidance in real estate transactions. Experts Lou Basenese and Kirsten Jordan shared insights on this trend during a segment on ‘Fox Business In Depth.’

Fox Business host Charles Payne also addressed the broader economic implications of AI, emphasizing that disruption is already occurring across various industries. His commentary on ‘Making Money’ reflects the growing recognition of AI’s transformative potential.

Entrepreneurship is another area where AI is making waves. Angie Hicks, co-founder of Angi, discussed her journey in building a home services giant and the role AI plays in her business strategy during an interview on ‘Mornings with Maria.’

For those interested in staying updated on the latest developments in AI technology, the Fox News AI Newsletter offers a comprehensive overview of the challenges and opportunities that lie ahead.

According to Fox News, these advancements in AI and technology are reshaping industries and influencing economic dynamics in significant ways.

Purdue Researcher Develops 3D Detection System for Self-Driving Vehicles

Purdue University’s Somali Chaterji has developed AGILE3D, a groundbreaking 3D detection system that enhances real-time perception for self-driving vehicles and other autonomous technologies.

A team at Purdue University, led by Indian American researcher Somali Chaterji, has unveiled a revolutionary 3D detection system that could significantly impact the manufacturing of autonomous vehicles, industrial robotics, delivery robots, and drones. This innovative system, known as AGILE3D, is currently patent-pending and is designed to outperform traditional 3D lidar perception pipelines, particularly during resource contention.

“AGILE3D is the first adaptive, contention- and content-aware 3D object detection system specifically tailored for embedded GPUs, or graphics processing units,” explained Chaterji, who serves as an associate professor of agricultural and biological engineering in Purdue’s College of Agriculture and College of Engineering. She also holds a courtesy appointment in the Elmore Family School of Electrical and Computer Engineering.

The AGILE3D system can dynamically adjust its detection strategies based on real-time hardware constraints and varying input data. This adaptability is crucial for applications that require rapid 3D perception while operating within the limited computational resources of onboard systems.

Research findings presented at prestigious conferences, including the Conference on Neural Information Processing Systems (NeurIPS), the European Conference on Computer Systems (EuroSys), and the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), indicate that AGILE3D meets stringent latency objectives. It delivers an accuracy improvement of over 3% compared to adaptive controllers and up to 7% over commonly used static 3D detectors.

Chaterji emphasized the broad applicability of AGILE3D, stating that it is particularly well-suited for autonomous driving, where real-time processing of lidar frames is essential for safety. “Beyond cars, AGILE3D can enhance the performance of delivery robots, drones, industrial and mobile robotics, as well as augmented reality and virtual reality applications,” she noted. “This is especially important in fields like digital agriculture and forestry, where platforms rely on embedded GPUs and require predictable latency for smoother and safer operations.”

As multiple onboard workloads—such as perception, tracking, planning, and in-cabin infotainment—compete for GPU resources, maintaining performance becomes increasingly challenging. Chaterji explained that resource contention arises when these various processes share the same embedded GPU and memory system simultaneously. An example of this is a ride-hailing robotaxi, where camera perception, lidar processing, tracking, mapping, and planning must all function concurrently.

One of the primary limitations of 3D lidar technology is its update rate, which dictates how frequently the sensor can provide a new point cloud frame, essentially a fresh 3D snapshot of the surrounding environment. AGILE3D addresses this challenge by employing two coordinated layers: a multibranch execution framework (MEF) and a contention- and content-aware reinforcement learning (CARL) controller. These components work together to maintain high accuracy even under varying levels of hardware contention and latency budgets ranging from 100 to 500 milliseconds.

Chaterji and her team are continuing to develop AGILE3D to facilitate dense scene understanding on onboard computers, ensuring that 3D semantic segmentation can operate reliably within tight compute and memory constraints. Funding for this project has been provided through Chaterji’s National Science Foundation CAREER grant, as well as a separate NSF grant for their CHORUS center.

Chaterji holds a PhD in Biomedical Engineering from Purdue University, where she has received several accolades, including the Chorafas International Award and the College of Engineering Best Dissertation Award in 2010. She completed her post-doctoral fellowship at the University of Texas at Austin in the Department of Biomedical Engineering and has been a scientific advisor to the IC2 Institute at the University of Texas at Austin since 2014. In 2016, she was honored with Purdue’s Seed-for-Success Award for securing a research grant exceeding $1 million.

The development of AGILE3D marks a significant advancement in the field of autonomous technology, promising to enhance the safety and efficiency of various applications reliant on real-time 3D perception.

According to a media release from Purdue University, the AGILE3D system represents a pivotal step forward in the integration of advanced perception capabilities into autonomous systems.

NASA Finalizes Strategy for Sustaining Human Presence in Space

NASA has finalized its strategy for maintaining a human presence in space, focusing on the transition from the International Space Station to future commercial platforms.

This week, NASA announced the finalization of its strategy aimed at sustaining a human presence in space, particularly in light of the planned de-orbiting of the International Space Station (ISS) in 2030. The new strategy emphasizes the necessity of maintaining the capability for extended stays in orbit after the ISS is retired.

The document, titled “NASA’s Low Earth Orbit Microgravity Strategy,” outlines the agency’s vision for the next generation of continuous human presence in orbit. It aims to facilitate greater economic growth and uphold international partnerships. However, the strategy comes amid uncertainties regarding the readiness of upcoming commercial space stations.

NASA Deputy Administrator Pam Melroy acknowledged the challenges posed by budget constraints, stating, “Just like everybody has to make hard decisions when the budget is tight, we’ve made some choices over the last year to cut back programs or cancel them altogether to ensure that we’re focused on our highest priorities.”

Commercial space company Voyager is among those working on potential replacements for the ISS. Jeffrey Manber, Voyager’s president of international and space stations, expressed support for NASA’s strategy, emphasizing the need for a commitment to reassure investors. “We need that commitment because we have our investors saying, ‘Is the United States committed?’” he noted.

The initiative to maintain a permanent human presence in space dates back to President Reagan, who highlighted the importance of private partnerships in his 1984 State of the Union address. “America has always been greatest when we dared to be great. We can reach for greatness,” he stated, while also warning that the market for space transportation could exceed the nation’s capacity to develop it.

Since the launch of the first piece of the ISS in 1998, the station has hosted more than 280 individuals from 23 countries, maintaining continuous human occupation for 24 years. The Trump administration's national space policy, released in 2020, called for a “continuous human presence in Earth orbit” and emphasized the transition to commercial platforms, a policy that has been upheld by the Biden administration.

NASA Administrator Bill Nelson addressed the potential for extending the ISS’s operational life, stating, “Let’s say we didn’t have commercial stations that are ready to go. Technically, we could keep the space station going, but the idea was to fly it through 2030 and de-orbit it in 2031.”

Recent discussions have raised questions about the meaning of “continuous human presence.” Melroy remarked at the International Astronautical Congress in October that there is still ongoing dialogue about whether this presence constitutes a “continuous heartbeat” or merely a “continuous capability.” She emphasized the importance of understanding this concept, especially in light of concerns from commercial and international partners regarding the potential loss of the ISS without a commercial station ready to take its place.

“Almost all of our industry partners agreed. Continuous presence is continuous heartbeat. And so that’s where we stand,” Melroy stated. She further underscored the United States’ leadership in human spaceflight, noting that the only other space station in orbit when the ISS de-orbits will be the Chinese space station. “We want to stay and remain the partner of choice for our industry and for our goals for NASA,” she added.

Three companies, including Voyager, are collaborating with NASA to develop commercial space stations. Axiom signed an agreement with NASA in 2020, while contracts were awarded to Nanoracks, now part of Voyager Space, and Blue Origin in 2021.

Melroy acknowledged the challenges posed by budget caps resulting from agreements between the White House and Congress for fiscal years 2024 and 2025, which have limited investment opportunities. “What we do is we co-invest with our commercial partners to do the development. I think we’re still able to make it happen before the end of 2030, though, to get a commercial space station up and running so that we have a continuous heartbeat of American astronauts on orbit,” she said.

Voyager remains optimistic about its development timeline, with plans to launch its Starlab space station in 2028. Manber stated, “We’re not asking for more money. We’re going ahead. We’re ready to replace the International Space Station.” He emphasized the importance of maintaining a permanent presence in space, warning that losing it would disrupt the supply chain that supports the burgeoning space economy.

Additional funding has been allocated to the three companies since the initial space station contracts, and a second round of funding could prove crucial for some projects. NASA may also consider funding new space station proposals, such as those from Vast Space of Long Beach, California, which recently unveiled concepts for its Haven modules and plans to launch Haven-1 as soon as next year.

Melroy concluded by stressing the importance of competition in the development of commercial space stations. “This is a development project. It’s challenging. It was hard to build the space station. We’re asking our commercial partners to step up and do this themselves with some help from us. We think it’s really important that we carry as many options going forward to see which one really pans out when we actually get there,” she said.

According to Fox News, NASA’s finalized strategy reflects a commitment to maintaining a human presence in space, while navigating the complexities of budget constraints and commercial partnerships.

X Service Outage Affects Thousands of Users Across the U.S.

Social media platform X experienced a significant outage on March 18, impacting thousands of users across the U.S. before service was restored later in the day.

On March 18, the social media platform X faced a considerable outage that affected thousands of users throughout the United States. According to data from the outage-tracking website Downdetector, the service was restored later in the day.

The disruption was most noticeable during the morning hours, with approximately 34,500 users reporting issues before the situation improved. By 11:39 a.m. Eastern Time, the number of outage reports had decreased to 849 on Downdetector. This website aggregates incident reports from various sources, including user submissions, suggesting that the actual number of affected users may be higher than the reported figures.

Users encountered difficulties accessing essential features of the platform, such as loading posts, refreshing feeds, and receiving notifications. The outage impacted both the mobile and web versions of X, disrupting real-time communication for many users around the globe.

The cause of the outage remains unclear, and X, which is owned by Elon Musk, did not respond to requests for comment.

This recent incident underscores the platform’s vulnerability to technical disruptions. X serves as a significant channel for news dissemination, public discourse, and business communication. The rapid increase in outage reports indicates that the problem escalated quickly, following a period of normal activity on the platform.

Such disruptions are not uncommon. X has experienced multiple outages in recent months, affecting users not only in the United States but also in other regions worldwide. These recurring issues raise concerns about the stability of the platform’s infrastructure and its capacity to manage large-scale user demand without interruptions.

Ultimately, this outage serves as a reminder of the crucial role digital platforms play in modern communication and the inherent risks associated with their occasional instability.

According to Downdetector, the service was restored later in the day, but the incident highlights ongoing concerns about the reliability of social media platforms.

Robot Firefighters Deployed to Enter Burning Buildings First

New robotic firefighting vehicles equipped with thermal cameras and water cannons are transforming emergency response by entering burning buildings before human firefighters.

Firefighters often confront significant challenges when responding to major blazes, primarily due to the uncertainty of what lies within a burning structure. Smoke obscures visibility, floors may be unstable, and toxic gases can accumulate rapidly. Even seasoned crews can find themselves entering buildings with limited information about the hazards they may face.

However, a new generation of robotic firefighting vehicles is poised to change this dynamic. These rugged robots can enter dangerous environments first, scanning the scene to locate fires and assess hazards before human firefighters step inside. By providing real-time information, these machines enable crews to make informed decisions, enhancing safety and effectiveness during firefighting operations.

The robotic firefighter is specifically designed for conditions where heat, smoke, and collapsing structures pose significant risks to human responders. Equipped with a powerful water cannon, the vehicle can adjust its output to deliver either a focused stream or a wide spray, depending on the situation. Additionally, thermal cameras allow the robot to see through thick smoke, providing critical visibility in chaotic environments.

One of the standout features of this robotic vehicle is its self-cooling system. The robot can spray a protective curtain of water around itself, preventing overheating even in extreme temperatures that can reach nearly 1,500 degrees Fahrenheit. In such conditions, human firefighters would be unable to operate safely.

Fire scenes are often unpredictable, with debris blocking pathways and visibility rapidly diminishing. To navigate these challenges, the robot is equipped with six independently powered wheels, each with its own motor. This design allows the vehicle to rotate in place and maneuver through tight spaces effectively. It can also climb steep ramps, such as those found in parking garages, and roll over obstacles up to a foot tall. An advanced driving system scans the terrain, guiding the robot around hazards while streaming live video back to firefighters outside the building.

This real-time video feed is invaluable, as it allows crews to see where flames are spreading and where potential survivors may be trapped. Such insights help firefighters formulate a strategic plan before entering the building, significantly enhancing their safety and effectiveness.

Another practical feature of the robotic firefighter addresses a common challenge faced by firefighters during rescues. The robot carries a hose that glows in dark, smoky environments, providing a visible path for rescuers. This glowing hose can be a lifesaver, helping firefighters navigate back to safety when visibility is nearly nonexistent.

The emergence of firefighting robots is part of a broader trend in emergency response, where machines are increasingly taking on tasks that place human lives at risk. Similar technologies are already in use across various fields, including autonomous mining trucks in remote locations and robots that clear landmines in former war zones. The underlying principle is straightforward: allow machines to handle the most dangerous initial moments of a crisis while human responders focus on rescue and strategy.

Engineers are also exploring the potential of artificial intelligence to enhance these robotic systems further. Future iterations may analyze fire size, smoke patterns, and heat levels to assist in firefighting decisions, making these robots even more effective in crisis situations.

The robotic firefighter was developed by Hyundai Motor Group in collaboration with South Korea’s National Fire Agency. Recently, the company donated several of these vehicles to fire stations in South Korea, allowing crews to begin utilizing them in real emergencies. Two robots have already been delivered, with additional units expected soon.

The technology has already undergone its first real-world test during a factory fire in North Chungcheong Province. The push for safer firefighting tools is underscored by alarming statistics; according to the Korea National Fire Agency, 1,788 firefighters have been injured or killed on the job over the past decade. By enabling robots to enter hazardous environments first, the hope is to reduce these numbers significantly.

While most people may not yet see these machines in their neighborhoods, the rapid adoption of firefighting technology suggests that their presence could become more common as departments recognize the benefits. U.S. fire agencies are already employing drones, thermal cameras, and robotics in various rescue scenarios. A robot that can scout a burning building before firefighters enter could soon become an essential tool in their arsenal, providing better information and reducing the risks associated with blind entries into dangerous structures.

For firefighters, this technology offers a critical advantage: enhanced situational awareness when every second counts. Although robots will never replace the human element in firefighting, they can provide invaluable support, ensuring that responders have the best possible information before they commit to entering a burning building.

As the technology continues to evolve, it raises an important question for communities: If your local fire department had access to a robot capable of entering a burning building first, would you support its use? This innovative approach to firefighting could lead to faster rescues and safer emergency responses in the future, ultimately benefiting everyone.

According to Fox News, the integration of robotic technology in firefighting represents a significant advancement in emergency response capabilities.

Orbiter Photos Reveal Lunar Modules from First Two Moon Landings

Recent aerial images from India’s Chandrayaan 2 orbiter reveal the Apollo 11 and Apollo 12 lunar landing modules more than 50 years after their historic missions.

Photos captured by the Indian Space Research Organisation’s (ISRO) moon orbiter, Chandrayaan 2, provide a stunning view of the Apollo 11 and Apollo 12 landing sites more than half a century after these historic missions. The images, taken in April 2021, were recently shared by Curiosity, an account on X dedicated to space exploration.

“Image of Apollo 11 and 12 taken by India’s Moon orbiter. Disproving Moon landing deniers,” Curiosity posted, accompanied by the overhead photographs that clearly depict the lunar landing vehicles resting on the moon’s surface.

Apollo 11, which made its historic landing on July 20, 1969, marked a monumental achievement in human space exploration, with astronauts Neil Armstrong and Buzz Aldrin becoming the first men to walk on the lunar surface. Their fellow astronaut, Michael Collins, remained in lunar orbit during their historic excursion.

The ascent stage of the lunar module, known as Eagle, was jettisoned into lunar orbit after it successfully rendezvoused with the command module, where Collins was stationed. Having completed its mission, the Eagle is presumed to have eventually fallen back to the moon’s surface.

Following Apollo 11, Apollo 12 became NASA’s second crewed mission to land on the moon, occurring on November 19, 1969. During this mission, astronauts Charles “Pete” Conrad and Alan Bean made history as the third and fourth men to walk on the lunar surface.

The Apollo program continued until December 1972, culminating in the final mission when astronaut Eugene Cernan became the last person to walk on the moon.

The Chandrayaan-2 mission was launched on July 22, 2019, almost exactly 50 years after Apollo 11, and the orbiter captured these remarkable images of the 1969 lunar landers nearly two years later.

In addition to Chandrayaan-2, India also launched Chandrayaan-3 last year, which successfully landed near the moon’s south pole, marking another significant achievement in lunar exploration.

These recent images serve not only as a testament to the enduring legacy of the Apollo missions but also highlight the ongoing advancements in space exploration technology, as nations around the world continue to explore the mysteries of the moon and beyond.

According to Fox News, the images from Chandrayaan 2 reaffirm the historical significance of the Apollo landings and contribute to the ongoing dialogue about space exploration and its impact on humanity.

Indian-American Researchers Develop Tool to Prevent Identity Leaks in AI Photo Editing

Three Indian American researchers from Purdue University have developed a groundbreaking system to safeguard personal identities during AI photo editing by limiting the detection of key attributes.

Three Indian American researchers at Purdue University have created a patent-pending system designed to protect against identity leakage during AI photo editing. This innovative tool reduces the ability of artificial intelligence to detect sensitive attributes such as eye color and facial hair.

The system, developed by Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty, is utilized before and after photos are uploaded to an AI editing platform. According to a media release from the West Lafayette, Indiana-based public research university, this technology aims to assist consumers, businesses, and institutions in editing and sharing profile photos, ID images, and personal pictures without compromising their private identities.

“Results of validation testing show that we can preserve editing quality while dramatically reducing what AI models can learn about your identity,” Aggarwal stated. “This is a critical step toward trustworthy generative AI.” Their research has been published in the peer-reviewed journal IEEE Transactions on Artificial Intelligence.

Aggarwal holds the title of University Faculty Scholar and serves as the Reilly Professor of Industrial Engineering, with additional appointments in the Department of Computer Science and the Elmore Family School of Electrical and Computer Engineering. Both Tamboli, a doctoral alumnus, and Punyamoorty, a doctoral candidate in computer and electrical engineering, have worked in Aggarwal’s research group.

“Our system allows users to mask sensitive regions on their photo, like the face, from an AI editing service,” Tamboli explained. “Those regions are masked locally on the user’s device using a detailed outline of the region.” He added that only the masked image is sent to the AI editing service. “After the image is edited by AI, our system reintegrates the sensitive region back into the edited image using geometric alignment and blending,” he noted.
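The mask-edit-reintegrate flow Tamboli describes might look roughly like the sketch below. The toy image, the flat fill value, and the direct paste-back are illustrative assumptions; the actual Purdue system uses detailed region outlines plus geometric alignment and blending that this simplified version omits:

```python
# Illustrative sketch of masking a sensitive region locally, sending only
# the masked image out, and restoring the region afterward. The paste-back
# here is a naive stand-in for the real system's alignment and blending.
import numpy as np

def mask_region(image, region_mask, fill_value=127):
    """Blank out sensitive pixels locally, before anything is uploaded."""
    masked = image.copy()
    masked[region_mask] = fill_value
    return masked

def reintegrate(edited, original, region_mask):
    """Paste the locally kept sensitive region back into the edited image."""
    result = edited.copy()
    result[region_mask] = original[region_mask]
    return result

# Toy 4x4 grayscale "image" with a 2x2 sensitive region in the middle.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

uploaded = mask_region(img, mask)       # what the remote AI service sees
edited = uploaded + 10                  # stand-in for the remote edit
final = reintegrate(edited, img, mask)  # sensitive pixels restored locally
```

The privacy property rests on the first step: the service only ever receives `uploaded`, in which the sensitive pixels have already been replaced on the user's device.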

Aggarwal emphasized that the Purdue system is the first solution to provide full privacy, as sensitive data never leaves the user’s device. This approach not only produces seamless, natural results in the final edited image but is also compatible with any commercial generative AI model, eliminating the need for retraining.

“It’s privacy by design,” Aggarwal said. “With our system, the AI platform never sees the face, but the final edited image still looks completely natural.” The researchers have disclosed their system to the Purdue Innovates Office of Technology Commercialization, which has applied for a patent to protect the intellectual property.

Addressing the privacy risks associated with AI editing tools, Tamboli noted that modern generative AI technologies edit photos with impressive realism but require users to upload full, unaltered images to cloud-based systems. These images often contain private details, including facial features and identifying characteristics.

“Requiring full, unaltered images creates serious privacy and security risks,” he said. “Once a photo is uploaded, users lose control over where their biometric data goes, how it is stored, or how it might be misused.” Tamboli criticized previous privacy approaches that relied on blurring sensitive regions, locking parts of an image, using stylization filters, or avoiding cloud uploads entirely, stating that these methods fail to fully protect personal identity.

The research team validated their system by testing how well leading AI foundation models could infer biometric attributes from masked versus unmasked images. They discovered that the Purdue system significantly reduced the ability of AI models to detect attributes such as eye color, facial hair, and age group. In some instances, the accuracy of attribute classification dropped by more than 80%, demonstrating robust protection against identity leakage.

The researchers are actively working to bring this technology closer to real-world deployment, with plans to expand the system’s capabilities to protect additional sensitive features, including medical details, ID documents, and other privacy-critical content.

This innovative development highlights the ongoing efforts of researchers to address privacy concerns in the rapidly evolving landscape of AI technology, ensuring that personal identities remain secure in the digital age.

According to The American Bazaar, the Purdue Innovates Office of Technology Commercialization is committed to advancing this technology for broader application.

The Email Technique That Uncovers Hidden Online Accounts

Searching your email inbox for old sign-up messages can help you uncover forgotten online accounts and reduce your digital footprint.

In today’s digital landscape, many individuals find themselves with a multitude of online accounts, often far more than they can remember. From shopping sites and travel apps to rewards programs and forums, the ease of signing up for services can lead to a cluttered digital existence.

These forgotten accounts can pose risks, as they contribute to a larger digital footprint and may expose personal information if a company experiences a data breach. Fortunately, there is a straightforward method to uncover these accounts using a tool that most people already have at their disposal: their email inbox.

When you create an account on a website, it typically sends a confirmation email. This means your inbox serves as a timeline of every service you have joined. Instead of racking your brain to remember all the sites you signed up for, you can simply search your email for clues.

To begin, open your email account and utilize the search bar. Enter phrases commonly found in sign-up emails, such as “welcome,” “confirm your account,” or “thank you for registering.” These keywords often yield a treasure trove of account confirmations, revealing services you may have forgotten about.
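The filtering step behind this search can be sketched as a small function. The phrase list mirrors the keywords above, while the sample senders and subjects are made up; applying it to a real inbox would go through your provider's search box or a mail export rather than this offline list:

```python
# Hedged sketch: flag senders whose subject lines look like sign-up
# confirmations. Sample addresses below are fictional.
SIGNUP_PHRASES = ("welcome", "confirm your account", "thank you for registering")

def find_signup_senders(messages):
    """Return sorted senders whose subjects match sign-up keywords.

    `messages` is an iterable of (sender, subject) pairs. Matching is
    case-insensitive substring search.
    """
    senders = set()
    for sender, subject in messages:
        lowered = subject.lower()
        if any(phrase in lowered for phrase in SIGNUP_PHRASES):
            senders.add(sender)
    return sorted(senders)

inbox = [
    ("deals@shopmail.example", "Welcome to ShopMail rewards!"),
    ("noreply@travelapp.example", "Confirm your account"),
    ("friend@mail.example", "Lunch on Friday?"),
]
print(find_signup_senders(inbox))
# ['deals@shopmail.example', 'noreply@travelapp.example']
```

The resulting sender list doubles as the cleanup checklist described below: each matched domain points at a service where an account likely exists.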

As you sift through the results, take note of the companies sending these messages. Many users are surprised to discover accounts they haven’t thought about in years. It’s not uncommon for the list to grow quickly once you start searching.

After identifying these accounts, compile a short list of those you no longer use. Even a brief search can uncover a surprising number of accounts, effectively creating a cleanup checklist for you.

Once you have your list, visit the official website of each service directly—avoid clicking on links in old emails for security reasons. Look for account settings or options to delete your account. If you cannot find the option to remove your account, consider reaching out to the company’s support team for assistance.

While it may take some time, deleting unused accounts significantly reduces the number of platforms storing your personal information. This proactive approach is essential for maintaining your online privacy.

In addition to the initial search, consider conducting another round using phrases like “unsubscribe” or “account settings.” These terms often indicate that you have created an account with the respective company. Many users are astonished by the number of services that appear during this search.

Closing old accounts not only helps mitigate risks but also reduces the chances of your personal information being compromised. However, it’s important to note that your data might still exist elsewhere on the internet. Data broker companies frequently collect personal details from various sources, including apps, websites, and public records. They create profiles that may include your address, phone number, browsing habits, and more.

After removing unused accounts, many individuals opt to use data removal services that request the deletion of their listings from these data brokers. This combination can significantly decrease the amount of personal information available online.

For those interested in exploring data removal services, resources are available to help you assess whether your personal information is already exposed on the web. A quick scan can provide insights into your online presence and help you take necessary precautions.

Digital clutter accumulates quietly over time, with each sign-up adding another account linked to your email address. The good news is that your inbox holds the key to uncovering many of these forgotten accounts. A few simple searches can reveal long-dormant accounts that have been lingering online for years.

Cleaning up these accounts requires some effort, but the benefits are substantial. Fewer accounts mean fewer places where your personal information can leak or be exposed. It’s worth considering how many companies may still possess your personal information without your knowledge.

For more tips on managing your online security and privacy, consider subscribing to newsletters that offer insights and alerts on urgent security matters.

According to CyberGuy.com, taking proactive steps to manage your online accounts can significantly enhance your digital security.

Newly Discovered Asteroid Identified as Tesla Roadster in Space

Astronomers recently identified a Tesla Roadster, launched into orbit by SpaceX in 2018, as an asteroid, leading to a swift retraction of the discovery.

Astronomers at the Minor Planet Center, hosted by the Harvard-Smithsonian Center for Astrophysics in Massachusetts, mistakenly classified a Tesla Roadster, launched into orbit by SpaceX in 2018, as an asteroid earlier this month. The object was registered as 2018 CN41, a designation that was promptly deleted on January 3 once it was confirmed to be Musk’s roadster.

The Minor Planet Center clarified on its website that the registry for 2018 CN41 was removed after it was determined that the orbit of the object matched that of an artificial satellite, specifically the Falcon Heavy Upper Stage carrying the Tesla Roadster. The center stated, “The designation 2018 CN41 is being deleted and will be listed as omitted.”

The Tesla Roadster was launched during the maiden flight of SpaceX’s Falcon Heavy rocket in February 2018. Initially, the vehicle was expected to enter an elliptical orbit around the sun, extending slightly beyond Mars before returning toward Earth. However, it appears to have exceeded the orbit of Mars and continued its trajectory toward the asteroid belt, as noted by Musk at the time.

When the roadster was misidentified as an asteroid, it was located less than 150,000 miles from Earth, which is closer than the moon’s orbit. This proximity raised concerns among astronomers about the need to monitor the object’s path and its potential closeness to Earth.

Jonathan McDowell, an astrophysicist at the Center for Astrophysics, commented on the incident, highlighting the challenges posed by untracked objects in space. He remarked, “Worst case, you spend a billion launching a space probe to study an asteroid and only realize it’s not an asteroid when you get there,” emphasizing the importance of accurate tracking and identification of celestial bodies.

As the situation unfolded, Fox News Digital reached out to SpaceX for further comment regarding the misidentification of the Tesla Roadster.

This incident serves as a reminder of the complexities involved in space exploration and the ongoing need for precise monitoring of objects in orbit, whether they are natural or man-made.

According to Astronomy Magazine, the mix-up underscores the challenges faced by astronomers in distinguishing between asteroids and artificial objects, particularly as the number of satellites and other debris in space continues to grow.

CarGurus Data Breach Exposes 12.4 Million Records Linked to ShinyHunters

CarGurus users are at risk after the ShinyHunters hacking group leaked 12.4 million records, including sensitive personal and financial information.

CarGurus users are facing significant security risks following a data breach linked to the ShinyHunters hacking group, which has allegedly leaked 12.4 million records. This incident raises concerns about the safety of personal information for millions of individuals who utilize the popular auto shopping platform each month.

The leaked data reportedly includes a variety of sensitive information, such as names, phone numbers, email addresses, physical addresses, and finance pre-qualification details. While a majority of the records had been exposed in previous incidents, approximately 3.7 million records are newly added, making this data particularly concerning for users.

The ShinyHunters group published a 6.1GB file on February 21, claiming it contained user records from CarGurus, which operates not only in the United States but also in Canada and the United Kingdom. The platform attracts around 40 million visitors monthly, allowing users to compare vehicles, contact sellers, and apply for financing.

According to Have I Been Pwned, a website that tracks data breaches, the exposed information encompasses email addresses, IP addresses, full names, phone numbers, physical addresses, account IDs, dealer details, subscription information, and finance pre-qualification application data, along with their outcomes. Notably, about 70% of the data had previously appeared in other breaches, while the remaining 3.7 million records are new.

CarGurus did not initially issue an official statement confirming the breach or respond to media inquiries regarding the incident. ShinyHunters is notorious for leaking company data when ransom negotiations fail and has recently targeted major brands across various sectors, including telecom, retail, finance, and technology.

The group typically gains access to sensitive data through social engineering tactics rather than directly breaching firewalls. In past incidents, they have used phone calls or fake login pages to trick employees into providing credentials. Once inside, attackers can quietly access cloud systems that house customer data. In some cases, they have even convinced employees to install malicious applications that grant access to customer databases without triggering alarms.

If the dataset is legitimate, criminals now have access to detailed personal profiles linked to car shopping and financing activities, which can be highly valuable. The finance pre-qualification data is particularly sensitive, as it indicates that individuals were sharing financial details, making them prime targets for scams, identity theft attempts, and fraudulent loan offers.

A spokesperson for CarGurus acknowledged a cybersecurity incident, stating, “We promptly responded by securing the affected environment, and we are currently working with a leading cybersecurity firm to investigate. Based on the investigation to date, we believe the activity has been contained and limited in scope. Also, at this time, there are no indications that dealer data feeds, APIs, or core systems or products used by our consumers or dealer partners have been compromised. We remain fully operational, and our services continue without interruption. We will notify any affected individuals in accordance with applicable laws.”

In light of this breach, users are advised to take immediate steps to mitigate their risk. One recommended action is to check if your email address has been affected by visiting Have I Been Pwned. Users can enter their email address to determine if their information appears in the CarGurus leak.

It is also essential to secure important accounts, such as email, medical, and banking, by using strong, unique passwords that combine letters, numbers, and symbols. Avoid predictable choices like names or birthdays, and never reuse passwords across multiple accounts. A password manager can simplify this process by securely storing complex passwords and generating new ones as needed.
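As an illustration of what a password manager's generator does, here is a minimal sketch using Python's standard secrets module. The 16-character default and the particular symbol set are arbitrary choices for the example, not any manager's actual policy:

```python
# Hedged sketch: generate a strong random password using the stdlib
# secrets module, resampling until letter/digit/symbol requirements hold.
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length=16):
    """Return a random password containing a letter, a digit, and a symbol."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:  # resample until the composition requirements are met
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.isalpha() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in SYMBOLS for c in pw)):
            return pw

print(generate_password())
```

Because each password is sampled independently, no two accounts end up sharing credentials, which is the point of the never-reuse advice above.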

Additionally, consider utilizing a personal data removal service. While no service can guarantee complete removal of personal data from the internet, these services actively monitor and erase personal information from various websites, reducing the risk of scammers cross-referencing data from breaches with information available on the dark web.

If CarGurus or your email provider offers two-factor authentication (2FA), enabling it adds an extra layer of security, making it more challenging for unauthorized individuals to access your accounts even if they have your password.
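App-based 2FA codes are not magic: most authenticator apps implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current 30-second time window. The sketch below reproduces that derivation with the standard library only, and can be checked against the published RFC test vectors:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a secret the attacker does not hold plus the current time window, a stolen password alone is not enough to log in.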

Users should exercise caution with emails or texts related to car loans, financing approvals, or dealership follow-ups. It is advisable not to click on links in unsolicited messages and instead contact the company directly using official contact details found on their website. Strong antivirus software can also help block malicious links and downloads that may accompany phishing campaigns.

For those who applied for financing, monitoring credit reports for unfamiliar inquiries or new accounts is crucial. Early detection can help prevent identity theft from escalating. If suspicious activity is detected, consider placing a credit freeze to safeguard personal information.

Identity theft protection services can also monitor unusual activity linked to your name, Social Security number, or financial accounts, alerting you promptly if someone attempts to open a new credit card in your name.

This incident underscores a broader issue concerning the security of personal and financial data collected by companies. If the leaked dataset is authentic, millions of individuals who were simply shopping for a car now face an increased risk of scams. While CarGurus has acknowledged a cybersecurity incident, it has yet to publicly confirm that the leaked dataset is genuine, leaving customers uncertain about whether their sensitive financial application data was exposed.

As discussions around data security continue, the question arises: should companies that collect financing data be required to publicly confirm or deny breaches within a specific timeframe? This incident highlights the need for transparency in the handling of sensitive information.

For further information and tips on protecting your data, visit CyberGuy.

Indian-American IIT Graduate Devendra Chaplot to Assist Musk in Superintelligence Development

Indian American AI researcher Devendra Chaplot has joined Elon Musk’s xAI and SpaceX to collaborate on developing advanced artificial intelligence systems, aiming to create what he calls “superintelligence.”

Devendra Singh Chaplot, an Indian American AI researcher, has joined Elon Musk’s xAI and SpaceX, where he is working closely with Musk and his teams to develop what he describes as “superintelligence.”

A graduate of the Indian Institute of Technology (IIT) Bombay, Chaplot is set to work closely with the teams at SpaceX and xAI on advanced artificial intelligence systems. He believes that the partnership between these two companies presents a unique opportunity to merge physical and digital intelligence.

Chaplot emphasizes that the high engineering culture and substantial resources available at both SpaceX and xAI could facilitate significant breakthroughs in the creation of advanced AI technologies. He expressed his enthusiasm on social media, stating, “Together SpaceX and xAI combine physical and digital intelligence under a leader who understands hardware at the deepest level. Add a high-agency culture with frontier-scale resources, and you get the possibility to achieve something truly unique.”

In his announcement, Chaplot reflected on his journey in the field of artificial intelligence, saying, “I’m excited to advance the fields I’ve obsessed over for years, from robotics research to building AI models on the founding teams of Mistral and TML. Both were extraordinary journeys with extraordinary people that shaped how I think about building intelligence from the ground up.”

Chaplot expressed gratitude for the experiences that led him to this point, adding, “Grateful for everything that brought me here and can’t wait to get started.”

He holds a Bachelor of Technology (BTech) degree in Computer Science and Engineering, along with a minor in Applied Statistics from IIT Bombay. Chaplot later earned a PhD in machine learning from Carnegie Mellon University, a renowned institution in the field of artificial intelligence, where he focused on building intelligent autonomous navigation agents.

Throughout his career, Chaplot has worked at the intersection of machine learning, robotics, and computer vision. His contributions include the development of smart systems capable of perceiving and interacting with their environments.

Prior to joining xAI and SpaceX, Chaplot was part of the founding team at Thinking Machines Lab, where he worked on research and product development, including the creation of Tinker, a training API that enables users to train large language models (LLMs).

Before that, he was a founding member of Mistral AI, where he contributed to the training of several models, including Mistral 7B, Mixtral 8x7B, and Mistral Large. He also led the multimodal research team responsible for training Pixtral 12B and Pixtral Large, and established the Mistral U.S. office in Palo Alto.

Earlier in his career, Chaplot served as a research scientist at Facebook AI Research, where he focused on the convergence of computer vision and robotics.

As Chaplot embarks on this new chapter with Musk’s teams, the AI community is keenly watching for the innovations that may emerge from this collaboration, which aims to push the boundaries of artificial intelligence.

According to The American Bazaar, Chaplot’s expertise and experience position him as a significant contributor to the ambitious goals of xAI and SpaceX.

Data Brokers Allegedly Conceal Opt-Out Pages from Google Users

Major data brokers have been accused of obscuring opt-out pages from search engines, complicating consumers’ efforts to stop the sale of their personal information, according to a recent Senate investigation.

A recent investigation by the U.S. Senate has revealed that several prominent data brokers allegedly concealed their opt-out pages from search engines, making it increasingly difficult for consumers to prevent the sale of their personal information.

For anyone who has attempted to opt out of a data broker’s services, the experience can be frustrating. Users often find themselves navigating through layers of legal jargon and complex web pages, leading to the unsettling question: Do these companies even want you to find the exit? The Senate’s findings suggest that the answer is a resounding no.

The investigation uncovered that major data brokers implemented coding on their opt-out pages that effectively blocked search engines from indexing them. This means that consumers could not easily locate the pages necessary to request the cessation of their data sales.
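The "coding" in question is typically a robots meta tag (or an equivalent `X-Robots-Tag` response header) containing a `noindex` directive, which tells search engine crawlers not to list the page. As an illustrative sketch of how such a directive can be detected in a page's HTML with Python's standard-library parser (the sample markup in the tests is hypothetical):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from <meta name="robots" ...> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        attrs = {k.lower(): (v or "") for k, v in attrs}
        if attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip().lower() for d in attrs.get("content", "").split(","))

def is_noindexed(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives
```

A page flagged this way remains fully reachable if you already know its URL; it simply never surfaces in search results, which is exactly why affected opt-out pages were so hard to find.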

Following pressure from Senator Maggie Hassan, four companies have since removed the obstructive code from their sites. The firms implicated in the report are known for collecting and selling personal information for various purposes, including marketing, analytics, and identity verification. The types of data they handle can range from browsing habits and device details to location history and sensitive identifiers.

Earlier investigations conducted by The Markup and CalMatters had already indicated that numerous data brokers employed “no index” code to obscure opt-out instructions from Google search results. While some companies removed the code after being contacted by reporters, Senator Hassan’s office later confirmed that the four companies in question still had their opt-out pages hidden from search engines. They have now taken steps to rectify this issue.

However, one company, Findem, has not yet removed the “no index” code from its “Do not sell or share my personal information” page. In response, Findem stated that an email from the senator’s office did not reach its CEO due to spam filtering, but assured that its privacy channels are actively monitored. The Senate Committee’s report highlighted this lack of action as a significant concern regarding the responsiveness to privacy requests and the accessibility of opt-out rights.

In a statement, a spokesperson for 6sense emphasized their commitment to privacy transparency, noting that their Privacy Center, where individuals can exercise their opt-out rights, has always been fully indexed. They acknowledged that a “no index” directive was previously included on their Privacy Policy page to mitigate spam but confirmed that it was removed immediately after the issue was raised by the Committee.

Opt-out pages are not merely a courtesy; in many states, they are mandated by law. When companies obscure these pages from search engines, they create barriers that hinder consumers from taking control of their personal information. This is particularly concerning given the financial repercussions of such breaches: identity theft linked to four major data broker incidents has cost U.S. consumers over $20 billion.

The implications of these breaches extend beyond privacy concerns; they pose significant risks to consumer protection. Criminal networks can exploit personal data such as Social Security numbers and home addresses to craft convincing scams, making the issue of data broker breaches a pressing consumer protection matter.

Senator Hassan’s investigation is part of a broader initiative to combat scams, which now account for nearly half a trillion dollars in losses annually and have evolved into one of the largest illicit industries worldwide. She has also initiated inquiries into the roles of satellite internet providers, online dating platforms, AI companies, and federal agencies in preventing fraud.

The uncomfortable reality is that your personal data likely resides in numerous databases you may not even be aware of. You did not consent to this; your information is traded within a vast marketplace. Even when opt-out forms are available, the process can feel overwhelming and time-consuming. With the absence of a comprehensive federal privacy law similar to the European GDPR, regulations vary significantly from state to state.

While the recent changes have made opt-out pages easier to locate, the overarching system remains largely unchanged. Completely erasing your presence from the internet is not feasible overnight, but there are steps you can take to minimize your exposure.

One effective method is to search your full name and city on Google to identify data broker listings, many of which contain opt-out links hidden within their privacy policies. California residents can utilize a free state-run tool called DROP at privacy.ca.gov/drop/ to request deletion from over 500 registered brokers, with other states beginning to implement similar systems.

Additionally, visiting the privacy or “Do not sell my information” pages on broker sites and carefully following the provided instructions can help you take control of your data. Keeping track of confirmation emails is also crucial.

For those seeking a more automated approach, data removal services can streamline opt-out requests across various brokers. While these services may not be perfect, they can save significant time. You can also explore expert-reviewed password managers and enable two-factor authentication (2FA) for financial and social accounts to enhance your security.

The data broker industry operates legally, yet many individuals remain unaware of the extent to which their information is traded. Until Congress enacts a national privacy law, oversight will continue to be fragmented, leaving consumers to navigate the complexities of data management on their own.

This situation transcends the issue of hidden code; it is fundamentally about control. When companies obscure opt-out pages from search engines, they create an uneven playing field. Although recent scrutiny has made these pages more accessible, the broader ecosystem remains designed to profit from personal data.

The pressing question is not merely whether opt-out pages are now visible on Google, but rather how much of your personal life you are comfortable entrusting to companies you may never have heard of. For further insights and assistance, visit CyberGuy.com.

Remote Robot Surgery Successfully Treats Cancer 1,500 Miles Away

U.K. surgeons have successfully performed remote robot-assisted surgery to remove prostate cancer from a patient located 1,500 miles away, marking a significant milestone in telesurgery.

Surgeons in the United Kingdom have achieved a groundbreaking milestone in medical technology by successfully conducting remote robot-assisted surgery to remove prostate cancer from a patient located 1,500 miles away. This pioneering operation, carried out at The London Clinic, represents the first instance of robot-assisted telesurgery in the U.K.

Traditionally, patients requiring specialized cancer surgery must travel to see a specialist. In this case, however, the specialist traveled to the patient. The procedure took place at St. Bernard’s Hospital in Gibraltar, where the patient remained in the operating room while Professor Prokar Dasgupta operated the robotic system from a control console at The London Clinic’s robotic center on Harley Street in London.

The advanced surgical robot used for this procedure is the Toumai robotic surgical system, developed by MicroPort MedBot. This platform is specifically designed for high-precision, minimally invasive surgeries. The operation was made possible through a secure fiber optic network that transmitted the surgeon’s movements to the robot in Gibraltar, with a latency of just 48 milliseconds—fast enough to create an almost real-time experience.

During the procedure, local urological surgeons James Allen and Paul Hughes were on standby in Gibraltar, ready to intervene if any complications arose or if the connection was interrupted. Fortunately, the operation proceeded without any issues.

The patient, 62-year-old Paul Buxton, has been a resident of Gibraltar for approximately four decades. He had initially planned to travel to London for his surgery, but was offered the opportunity to participate in a telesurgery trial earlier this year. This innovative approach allowed him to undergo the procedure in his local hospital, significantly reducing the disruption to his life. Reports indicate that he felt fantastic just days after the surgery.

The development of remote robotic surgery has been a long time in the making, with early examples dating back to the 2001 Lindbergh Operation, in which surgeons in New York performed a gallbladder removal on a patient in Strasbourg, France. Since then, technology has advanced significantly, with cross-continental robotic surgeries being conducted between cities such as Rome and Beijing, as well as long-distance prostate operations in parts of Africa.

The successful procedure at The London Clinic signifies a shift in the landscape of remote robotic surgery, moving from experimental demonstrations to practical medical applications. To further showcase this technology, the hospitals plan to live-stream a telesurgery procedure to thousands of surgeons at the upcoming European Association of Urology Congress.

Several key technologies work in tandem to make remote surgery feasible. Surgeons need to see and react instantly during operations, as even minor delays can complicate precise movements. Modern fiber optic networks, along with backup 5G connections, help maintain extremely low latency. Robotic surgical systems translate a surgeon’s hand movements into smaller, more stable actions inside the patient’s body, which can enhance outcomes in delicate procedures like prostate cancer removal. High-definition 3D cameras provide surgeons with exceptional clarity, often surpassing the visibility offered by traditional open surgery.

Despite these advancements, remote robotic surgery still faces significant challenges. Infrastructure remains a critical issue, as hospitals must ensure that their networks are highly reliable with minimal downtime. The costs associated with robotic surgical systems and specialized networks can also be prohibitive, often running into millions of dollars. Additionally, regulatory concerns arise when surgeons operate across borders, introducing complexities related to legal and licensing requirements.

Every remote procedure necessitates contingency plans, with local surgical teams prepared to step in if technology fails. For now, hospitals view telesurgery as an emerging capability rather than a routine practice.

The long-term implications for patients could be profound. In the future, individuals may not need to travel to major medical centers for complex procedures. Instead, specialists could operate remotely, allowing patients to remain in hospitals closer to home. This evolution could particularly benefit those in rural areas or regions with limited access to specialized care, potentially reducing wait times for certain procedures.

Safety remains the paramount concern in this transition. Hospitals must demonstrate that remote procedures are as reliable as traditional surgeries before the technology can become widespread. The successful connection between London and Gibraltar illustrates the rapid advancements in surgical technology, with reliable networks and sophisticated robots enabling surgeons to guide delicate procedures from thousands of miles away.

While remote surgery may not become commonplace overnight, the trajectory is clear. As technology continues to improve, distance may no longer be a barrier to accessing world-class surgical care.

For further insights on this topic, please refer to Fox News.

Private Lunar Lander Blue Ghost Successfully Lands on Moon for NASA

A private lunar lander, Blue Ghost, successfully landed on the moon on Sunday, delivering equipment for NASA and marking a significant milestone for commercial space exploration.

A private lunar lander carrying equipment for NASA successfully touched down on the moon on Sunday, with Mission Control confirming the landing from Texas. This achievement highlights the growing involvement of private companies in lunar exploration as they prepare for future astronaut missions.

Firefly Aerospace’s Blue Ghost lander made its descent from lunar orbit on autopilot, targeting the slopes of an ancient volcanic dome located in an impact basin on the moon’s northeastern edge. The company’s Mission Control, situated outside Austin, Texas, celebrated the successful landing.

“You all stuck the landing. We’re on the moon,” said Will Coogan, chief engineer for the lander at Firefly Aerospace.

This upright and stable landing makes Firefly the first private company to successfully place a spacecraft on the moon without crashing or tipping over. Historically, only five countries have achieved successful moon landings: Russia, the United States, China, India, and Japan, and even some of those government missions have ended in failure.

The Blue Ghost lander, named after a rare species of firefly found in the U.S., stands 6 feet 6 inches tall and spans 11 feet wide, providing enhanced stability during its lunar operations.

Approximately half an hour after landing, Blue Ghost began transmitting images from the lunar surface. The first photo sent back was a selfie, albeit somewhat obscured by the sun’s glare.

In addition to Blue Ghost, two other companies are preparing to launch their lunar landers, with the next mission expected to join Blue Ghost on the moon later this week.

This successful landing marks a significant step forward in the commercial space sector, as private companies continue to explore opportunities on Earth’s natural satellite.

According to The Associated Press, the advancements in lunar exploration by private entities could pave the way for more ambitious missions in the future.

Donny Osmond Utilizes AI Technology to Duet with His Younger Self

Donny Osmond’s Las Vegas residency features a groundbreaking digital duet with his 14-year-old self, showcasing the intersection of nostalgia and modern technology in entertainment.

Donny Osmond has long been a figure of evolution in the entertainment industry, and his latest venture in Las Vegas exemplifies this spirit. During his residency at Harrah’s, the legendary performer engages audiences with a digital duet featuring a virtual version of his 14-year-old self, the same teenage sensation who won hearts with hits like “Puppy Love.” This innovative performance not only captivates but also reflects Osmond’s willingness to embrace technology as a means of reinterpreting his storied career.

Osmond’s ability to connect with multiple generations is a testament to his enduring appeal. Older fans remember him as the teen idol who burst onto the scene, while others know him from his iconic variety show with sister Marie. Theater enthusiasts recall his role in “Joseph and the Amazing Technicolor Dreamcoat,” and younger audiences recognize him as the voice of Captain Shang in Disney’s “Mulan.” Additionally, reality TV fans may remember his appearances on “Dancing With the Stars” and “The Masked Singer.” This diverse portfolio allows Osmond to transcend eras, and he embraces this multifaceted identity rather than shying away from it.

In a recent conversation for the “Beyond Connected” podcast, Osmond shared insights into the technology behind his performance. The concept of singing alongside a digital version of himself has been a long-held dream. “Even when I was a teenager, I thought someday there’s going to be technology where John Wayne could be Obi-Wan Kenobi. And I was right,” he remarked, reflecting on his fascination with the possibilities of future technology.

Osmond’s curiosity led him to ponder, “Why can’t I sing ‘Puppy Love’ with my 14-year-old self on stage?” The answer involved a blend of advanced digital production techniques, AI modeling, and innovative stage design. He explained, “The face is actually my 14-year-old face taken from pictures, the voice is my voice from interviews when I was 14, and the body is my 14-year-old grandson.” This combination creates a stunning illusion where both versions of Osmond appear to share the stage simultaneously.

Contrary to popular belief, the younger Osmond is not a hologram. “It’s not a projection, like a laser projection. It’s not like a hologram. It’s a totally different technology,” he clarified. The illusion relies on a hollow box technology integrated into the stage set, designed to resemble a vintage recording booth. Inside, advanced visual systems merge CGI, AI modeling, and stage lighting to produce a full-size, three-dimensional image of the younger Osmond, animated by his grandson’s movements. This setup allows Osmond to interact with his younger self in real time, creating a captivating experience for the audience.

Even after performing this sequence night after night, Osmond finds the experience exhilarating. “I do it every night, and it never gets old. It’s like looking in the mirror 54 years ago,” he said. For longtime fans, this moment serves as a bridge between the youthful star they once adored and the seasoned performer he has become, illustrating a career that spans generations.

Osmond’s enthusiasm for technology is evident in his approach to his performances. “Ever since I was a teenager, I’ve always been kind of a geek or nerd about technical things,” he admitted. This passion drives him to explore new tools and methods to keep his show fresh and engaging. He even revealed a surprising hobby: “I’d have to say, uh, Google Sheets because, uh, I’ve created algorithms.” His interest in data analysis and technology extends beyond the stage, as he employs smart home systems to monitor his properties and ensure security.

As discussions around artificial intelligence continue to evolve in the entertainment industry, Osmond maintains a balanced perspective. “Any technology put in the wrong hands can turn into nefarious things, but look at the good it can do,” he stated. He believes that AI has the potential to drive significant advancements across various fields, including medicine and entertainment. “What a great time to be alive with today’s technology. It’s amazing to watch it all happen in real time,” he added, emphasizing the importance of staying engaged with technological progress.

Osmond also shared an intriguing anecdote about his music’s reach beyond Earth. He mentioned that one of his songs, “Start Again,” was reportedly used to test the sound system on a spacecraft capsule. “They actually used my song to test the sound system on one of the capsules,” he said, adding that his voice may even be sitting on the moon, as he contributed background vocals to a song that was taken there during the Apollo missions.

Reflecting on how digital platforms might have transformed his early career, Osmond mused, “Can you imagine what I could have done during the ‘Puppy Love’ years with social media?” He noted that the connections fans once sought in person are now often facilitated through social media and digital communities, illustrating how technology has reshaped the entertainment landscape.

Osmond’s career began with his brothers as part of the Osmonds, a family group that became a television sensation in the late 1960s and early 1970s. He later gained fame alongside his sister Marie in their hit variety series “Donny & Marie.” Today, he continues to headline his own residency at Harrah’s Las Vegas, with performances extended through May 2026, reflecting his ongoing popularity.

To keep fans engaged, Osmond has developed the Donny app, which consolidates news, videos, tour updates, and a timeline of his career. Fans can also access tickets and show information through his official website, Donny.com. By blending nostalgia with modern technology, Osmond remains connected to fans across generations while pushing the boundaries of his performances.

Donny Osmond’s journey illustrates how curiosity and adaptability can propel an artist forward. Rather than resisting change, he continues to explore the technologies shaping today’s world, from AI-enhanced performances to data-driven applications and smart home systems. His enthusiasm for innovation mirrors the passion he brings to his craft, making him a unique figure in the entertainment industry. For more insights into his experiences and thoughts on technology, be sure to listen to the “Beyond Connected” conversation with Donny Osmond.

For those curious about their own digital habits, a quick quiz is available at Cyberguy.com to assess device and data protection.

According to CyberGuy, Donny Osmond’s career exemplifies the power of curiosity and innovation in the ever-evolving landscape of entertainment.

Athena Lunar Lander Reaches Moon; Condition Still Uncertain

Athena lunar lander successfully reached the moon, but mission controllers remain uncertain about its condition and landing site.

Mission controllers have confirmed that the Athena lunar lander successfully touched down on the moon, but the status of the spacecraft remains unclear. The landing occurred earlier on Thursday, yet officials have not been able to ascertain the condition of the lander or the precise location of its touchdown, according to a report from the Associated Press.

Athena, owned by Intuitive Machines, was equipped with an ice drill, a drone, and two rovers. While the lander has reportedly been able to communicate with its controllers, the details of its condition are still being evaluated. Tim Crain, mission director and co-founder of Intuitive Machines, was heard instructing his team in Texas to “keep working on the problem,” even as the spacecraft sent back apparent “acknowledgments.”

The uncertainty surrounding Athena’s status follows a challenging history for Intuitive Machines. Last year, their Odysseus lander reached the moon but landed sideways, which added pressure to the current mission. Athena’s landing marks a significant milestone, as it is the second lunar craft to land this week, following Firefly Aerospace’s Blue Ghost, which successfully touched down on Sunday.

Firefly’s chief engineer, Will Coogan, celebrated the achievement, stating, “You all stuck the landing. We’re on the moon.” The successful landing of Blue Ghost has made Firefly Aerospace the first private company to place a spacecraft on the moon without it crashing or landing in an unstable position.

As the situation develops, NASA and Intuitive Machines concluded their online live stream and announced plans to hold a news conference later on Thursday to provide updates on Athena’s status.

According to the Associated Press, the outcome of this mission is being closely monitored as the space community awaits further information about the lander’s condition and operational capabilities.

Transfer Photos from Your Phone to a Hard Drive Easily

Learn how to transfer photos from your smartphone to a hard drive, freeing up space and avoiding costly cloud storage fees while maintaining access to your images.

For many smartphone users, the moment inevitably arrives when a notification alerts them that their device storage is nearly full. This often leads to a frantic search for ways to free up space, including deleting emails, clearing messages, and removing apps.

Many find themselves in this predicament due to automatic backups to services like Google Photos or iCloud, which offer limited free storage. Once that space is filled, users typically face a common dilemma: pay for additional storage or find an alternative solution.

Janice from Alabama recently reached out about her struggle with this issue, a situation that millions of smartphone users encounter annually. Fortunately, there is a viable option: transferring photos to a hard drive that you own. This method not only allows you to keep your images accessible but also helps you avoid ongoing subscription fees.

The simplest way to transfer your photos is to first copy them to a computer. From there, you can easily move them to an external hard drive. The process varies slightly depending on whether you are using an Apple or Android device.

For Apple users, the process involves importing photos through the Photos app on your computer rather than treating the phone as a storage device. If you are signed into iCloud and have iCloud Photos enabled on your iPhone, your photos may already be syncing automatically. In this case, you can access and download them directly from the Photos app on your Mac or through iCloud Photos in a web browser.

Once your photos are on your computer, copy the files into a designated folder so you have a complete backup before moving them to your hard drive. For Windows users, the process is straightforward: a connected phone appears as a storage device in File Explorer, and you can copy your photos directly to your computer.

After your photos are safely stored on your computer, transferring them to an external hard drive is a quick task. External drives can accommodate tens of thousands of photos, depending on their capacity. For recommendations on the best external drives, visit Cyberguy.com.

If you prefer to skip the computer altogether, some flash drives can connect directly to smartphones. These drives typically come with a companion app that facilitates the transfer of photos from your phone to the drive. This option is particularly useful for those needing to free up space quickly. Check out our best flash drive recommendations at Cyberguy.com for more information.

After transferring your photos to a hard drive, take some time to organize them into folders. While hard drives are generally reliable, maintaining a second backup is advisable to protect your memories in case one drive fails.
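For those comfortable with a little scripting, the organize-and-verify step above can be automated. The sketch below is one possible approach, not the only one: it copies common photo and video files from a source folder into year-month subfolders on a backup drive and verifies each copy with a SHA-256 checksum before trusting it. The folder paths and extension list are assumptions you would adapt to your own setup:

```python
import hashlib
import shutil
import time
from pathlib import Path

# Illustrative extension list; extend it to match your camera's formats.
PHOTO_EXTS = {".jpg", ".jpeg", ".png", ".heic", ".mp4", ".mov"}

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_photos(source: Path, drive: Path) -> int:
    """Copy photos into <drive>/<YYYY-MM>/ folders, verifying each copy."""
    copied = 0
    for item in sorted(source.rglob("*")):
        if not item.is_file() or item.suffix.lower() not in PHOTO_EXTS:
            continue
        stamp = time.strftime("%Y-%m", time.localtime(item.stat().st_mtime))
        dest_dir = drive / stamp
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / item.name
        shutil.copy2(item, dest)                 # copy2 preserves timestamps
        if sha256_of(dest) != sha256_of(item):   # verify before trusting the copy
            raise IOError(f"verification failed for {item}")
        copied += 1
    return copied
```

Running the same script against a second drive gives you the recommended redundant backup with no extra effort.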

Although cloud storage may seem inexpensive initially, the monthly fees can accumulate over time. In contrast, an external hard drive often costs less than a year or two of cloud storage fees. Once purchased, the storage is essentially free, and you retain full control over your photos rather than relying solely on a company’s server.
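As a rough back-of-the-envelope comparison (the $2.99 monthly fee and $60 drive price below are assumed example figures, not quotes from any provider), a few lines of arithmetic show how quickly the one-time purchase wins:

```python
# Example figures only; substitute your own plan's fee and drive price.
monthly_cloud_fee = 2.99   # assumed entry-level cloud tier, USD per month
drive_price = 60.00        # assumed one-time cost of a 2 TB external drive

break_even_months = drive_price / monthly_cloud_fee
five_year_cloud_cost = monthly_cloud_fee * 12 * 5

print(f"Drive pays for itself in about {break_even_months:.0f} months")
print(f"Five years of cloud fees would total ${five_year_cloud_cost:.2f}")
```

Under these assumptions, the drive pays for itself in well under two years, and everything after that is free storage you own outright.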

Janice’s inquiry reflects a common concern: do we really need to continue paying companies to store our own memories? The answer is no. With a simple cable and an affordable hard drive, you can free up space on your phone, keep every photo you want, and avoid ongoing storage fees. Once you familiarize yourself with the process, it becomes quick and routine.

Consider this: if your phone holds years of photos and videos, should those memories reside solely on a company’s cloud server, or should they be stored somewhere you fully control? For more tips and to share your thoughts, visit us at Cyberguy.com.

According to CyberGuy.com, taking control of your digital memories is not only feasible but also beneficial in the long run.

ISS Crew Member Plays Prank as SpaceX Team Arrives

Russian cosmonaut Ivan Vagner welcomed the Crew-10 astronauts to the International Space Station with a humorous twist, donning an alien mask during their arrival on March 16, 2025.

In a lighthearted moment aboard the International Space Station (ISS), Russian cosmonaut Ivan Vagner greeted the Crew-10 astronauts with a playful twist. As the SpaceX Crew Dragon capsule successfully docked at 12:04 a.m. EDT on March 16, 2025, Vagner welcomed the newcomers while wearing an alien mask, showcasing that even astronauts have a sense of humor.

The Crew-10 mission launched from NASA’s Kennedy Space Center in Florida at 7:03 p.m. on Friday, March 14, and arrived at the ISS approximately 29 hours later. As the station crew prepared for the newcomers to disembark, Vagner floated around in his alien disguise, complete with a hoodie, pants, and socks, creating a memorable and amusing welcome for the new arrivals.

NASA astronauts Anne McClain and Nichole Ayers, JAXA (Japan Aerospace Exploration Agency) astronaut Takuya Onishi, and Roscosmos cosmonaut Kirill Peskov entered the ISS shortly after the hatches between the space station and the SpaceX Dragon spacecraft were opened at 1:35 a.m. EDT. This moment was marked by the ringing of the ship’s bell, a tradition that signifies the arrival of new crew members.

Following the hatch opening, the Crew-10 astronauts floated into the station, where they were greeted with handshakes and hugs from the Expedition 72 crew, including Vagner. “It was a wonderful day. Great to see our friends arrive,” said Suni Williams, who was among those welcoming the newcomers.

Williams and fellow astronaut Butch Wilmore are expected to guide the new arrivals through space station operations before returning home from what has become a nine-month stay. The pair launched on Boeing’s first crewed Starliner test flight, which was originally scheduled to last about a week. However, technical problems with the capsule forced NASA to bring the Starliner back to Earth without its crew, extending their time aboard the station.

As part of the ongoing operations aboard the ISS, Crew-9 commander Nick Hague and Russian cosmonaut Aleksandr Gorbunov are scheduled to depart the station on Wednesday, March 19, at approximately 4 a.m. EDT, before splashing down off the coast of Florida.

This playful encounter highlights the camaraderie and lighthearted spirit that exists among astronauts, even in the challenging environment of space. Such moments not only provide entertainment but also strengthen the bonds between international crew members working together in orbit.

According to Fox News, the Crew-10 mission continues to exemplify the collaborative efforts of space agencies around the world as they explore the final frontier.

Condé Nast Technology Leader Sanjay Bhakta Joins Flatiron Software Board

Sanjay Bhakta, a prominent Indian American technology executive, has joined the board of Flatiron Software to guide the company’s strategic growth in software engineering and artificial intelligence.

Sanjay Bhakta, the Chief Product and Technology Officer at Condé Nast, has been appointed to the board of Flatiron Software. His role will focus on shaping the strategic growth of the software engineering and AI company.

Flatiron Software, based in Miami, Florida, is known for its ability to deliver on promises that larger firms often fail to fulfill. The company specializes in providing technology solutions for enterprises that cannot afford to make mistakes, emphasizing speed and scalability.

Bhakta brings over two decades of experience in technology leadership, having previously built and managed technology at major organizations such as HBO, Pearson, and AT&T. These companies are known for their complex environments where failure is not an option.

He joins a distinguished board that includes Rajiv Pant, former CTO of The New York Times and technology leader at The Wall Street Journal, Condé Nast, and Hearst.

“I’m excited to join Flatiron Software’s board at such a pivotal moment for the industry,” Bhakta stated. “The company has built a strong foundation for helping organizations navigate AI-driven transformation, and I look forward to contributing my experience to accelerate that impact.”

Bhakta’s appointment is part of Flatiron’s strategic investment in building a board equipped to guide the company through its next growth phase. As demand for AI-augmented software development and strategic technology consulting increases, Flatiron is positioning itself with leadership that has not only witnessed digital transformation but has also driven it.

Currently, Bhakta leads Condé Nast’s global technology and product strategy. Throughout his career, he has transformed how large organizations build and deliver technology. His expertise includes scaling engineering teams, modernizing digital infrastructure, and fostering conditions for sustained innovation.

Bhakta has a proven track record of overseeing global teams of over 1,000 engineers and managing technology budgets exceeding $250 million. His approach consistently emphasizes measurable business outcomes rather than technology for its own sake.

At HBO, he was instrumental in building and leading the end-to-end digital media supply chain that powered HBO GO and HBO NOW. This mission-critical operation required both deep technical expertise and sharp strategic judgment.

During his tenure at Pearson, Bhakta spearheaded the company’s digital transformation, successfully transitioning it from a traditional publishing giant to a platform-first, cloud-native organization. Across all his roles, Bhakta has maintained a focus on making technology work harder for the business and the people it serves.

His extensive experience and strategic insight are expected to play a crucial role in Flatiron Software’s continued growth and innovation in the rapidly evolving technology landscape, according to a media release.

The announcement of Bhakta’s appointment underscores Flatiron’s commitment to enhancing its leadership infrastructure as it navigates the complexities of the AI-driven market.

For more information, refer to The American Bazaar.

Android Addresses 129 Security Vulnerabilities in Major Update

Google’s latest Android update addresses 129 security vulnerabilities, including a zero-day flaw linked to Qualcomm chips that has already been exploited in targeted attacks.

Google has rolled out a significant Android update that fixes a total of 129 vulnerabilities, including a critical zero-day flaw associated with Qualcomm chips that has already been exploited in attacks.

For many users, Android security updates often go unnoticed until a headline like this emerges. Suddenly, the device used for messaging, banking, and work becomes part of a broader cybersecurity narrative. This week, Google’s latest Android security updates have highlighted the importance of timely software maintenance.

Among the vulnerabilities addressed, one flaw in particular has caught the attention of security researchers. Tracked as CVE-2026-21385, this zero-day vulnerability is especially concerning because it was already being exploited in targeted attacks before a patch was available, leaving users exposed before they had any way to protect their devices.

The issue is linked to the graphics processing component in many Qualcomm chipsets. Specifically, it involves an integer overflow, a type of calculation error that can lead to memory corruption within the system. Once this occurs, attackers may gain unauthorized access to the device.
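To make that bug class concrete, here is a toy Python model of how a 32-bit size calculation can wrap around. The Qualcomm driver code itself is not public, so this is purely an illustration of integer overflow in general, not the actual flaw:

```python
MASK32 = 0xFFFFFFFF  # 32-bit unsigned arithmetic wraps modulo 2**32

def buffer_size_32bit(count: int, element_size: int) -> int:
    """Mimic `count * element_size` as a 32-bit unsigned multiply."""
    return (count * element_size) & MASK32

# A huge element count makes the multiply wrap to a tiny number...
size = buffer_size_32bit(count=0x4000_0001, element_size=4)
print(size)  # 4 -- the allocator would reserve only 4 bytes

# ...but later code that trusts `count` and writes that many elements
# runs far past the 4-byte buffer, corrupting adjacent memory.
```

That mismatch between the tiny allocation and the huge write is the memory corruption attackers then steer toward gaining control of the device.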

Qualcomm has indicated that this flaw affects 235 different chipsets, meaning a wide range of Android phones could potentially be impacted. Google’s Threat Analysis Group identified the issue and reported it through coordinated disclosure practices, prompting Qualcomm to collaborate with device manufacturers to implement necessary patches.

The implications of this Android security vulnerability are serious. Several of the patched vulnerabilities allow attackers to execute code remotely or gain elevated privileges on a device. One particular flaw within the Android System component is especially alarming, as it could enable remote code execution without any user interaction. This means an attacker could exploit the flaw without requiring the victim to click a link or install an app, making it one of the most dangerous types of vulnerabilities.

The March Android security bulletin addresses ten critical flaws across the System, Framework, and Kernel components. These core components are essential to Android’s functionality, so any weaknesses can have widespread repercussions across millions of devices.

Google has released two patch levels for this update. The second update encompasses everything in the first, in addition to fixes for extra hardware components and third-party software. Google Pixel devices typically receive updates immediately, while many other Android users may experience delays.

Phone manufacturers such as Samsung, Motorola, and OnePlus often need to test the patches before they are released for specific models. Additionally, carriers may delay updates to ensure compatibility. Consequently, some users receive security patches promptly, while others may have to wait weeks.

To protect your Android phone from security threats, there are several proactive steps you can take. First, install Android updates as soon as they become available. Regularly check for updates by navigating to Settings, tapping on Security and Privacy or Software Update, and selecting Check for Updates.

Second, avoid downloading apps from unknown sources. Stick to trusted stores like Google Play, as third-party app stores can pose a higher risk of malware.

Third, keep Google Play Protect enabled. This built-in malware protection scans apps for malicious behavior and alerts you to any suspicious activity. However, it is important to note that Google Play Protect is not infallible. Therefore, consider using robust antivirus software for an additional layer of protection.

Additionally, set a strong passcode on your phone and enable fingerprint or face unlock features if available. This helps safeguard your device in case it is lost or stolen. Lastly, exercise caution with suspicious links, as many attacks begin with phishing messages. Avoid clicking on unknown links in texts, emails, or social media messages.

This recent Android update underscores the complexities of modern mobile security. Google’s Threat Analysis Group frequently uncovers vulnerabilities that may already be exploited in real-world scenarios. These findings trigger coordinated responses involving chip manufacturers, device makers, and security researchers. In this instance, Qualcomm received the report in December and provided fixes to device manufacturers in early 2026.

While the process may appear slow from the outside, it involves numerous companies collaborating to prevent widespread exploitation. Security updates may not seem exciting, but they are crucial for protecting billions of smartphones globally.

This latest Android update serves as a stark reminder of the importance of timely software updates. A zero-day flaw linked to Qualcomm graphics hardware was already being targeted before many users were even aware of its existence. Installing updates promptly is one of the simplest yet most effective ways to protect your device and personal data.

So, the next time your Android device prompts you to install a security patch, consider this: Do you install it immediately, or do you tap “remind me later”?

For further information, consult CyberGuy.com.

Drone Technology and AI Transforming Modern Warfare Tactics

Artificial intelligence and advanced computer vision are revolutionizing drone capabilities, reshaping modern warfare, and redefining the dynamics of the battlefield.

As an ophthalmologist and technology commentator, I have been captivated by the transformative impact of artificial intelligence (AI) and computer vision on drone technology and its implications for modern warfare. In this new era of conflict, the advantage lies not solely with the largest bombers or stealth fighters, but with drones that possess the ability to see and act with superhuman precision.

Unmanned aerial vehicles (UAVs), once merely remote-controlled flying cameras, have evolved into autonomous warriors. Their vision systems, powered by AI, are now central to defining military strategy, tactics, and geopolitical maneuvers. This transformation is particularly evident in the ongoing conflict in Iran, where drones have inundated the airspace, turning it into a contested battlefield dominated by AI-driven vision and autonomous targeting.

The evolution of drones has been remarkable. From the early days of unmanned flight, which began with Austrian explosive balloons in 1849, to the World War I Kettering Bug and the mass-produced Radioplane OQ-2, the groundwork for contemporary aerial systems was laid. By the 1970s, platforms like Israel’s Tadiran Mastiff showcased the potential of real-time video surveillance. Today, drones operate across both civilian and military domains, transitioning from passive cameras to intelligent agents capable of interpreting their surroundings, making decisions, and executing complex missions.

The integration of AI and computer vision has revolutionized drone capabilities. Modern drones can autonomously avoid collisions, detect and track objects, navigate intricate environments, and create three-dimensional maps for mission planning. In military contexts, these vision systems facilitate real-time reconnaissance, target identification, adaptive mission execution, and swarm tactics that can overwhelm defenses. By combining rapid data processing with autonomous decision-making, drones extend human perception, operate in hazardous conditions, and perform tasks that would be perilous for human operators.

Human vision is remarkably sophisticated, adapting instantly to varying light conditions, interpreting depth and motion, and integrating context, memory, and experience to recognize patterns and make quick decisions. Soldiers spotting camouflage, pilots navigating shifting terrain, and commanders assessing intent rely on these faculties daily. In contrast, drone vision is engineered for speed, scale, and consistency. Modern drones utilize AI-powered systems that combine high-resolution cameras, infrared sensors, and sometimes LIDAR to capture visual data. Neural networks analyze this information in real-time, detecting objects, calculating movement, and predicting hazards.

Unlike humans, drones can track hundreds of objects simultaneously, operate in total darkness or inclement weather, and process inputs in milliseconds. While humans excel at interpretation, drones dominate in relentless detection and rapid reaction.

At the heart of today’s military drones is computer vision. Cameras, infrared sensors, and LIDAR feed streams of visual data into convolutional neural networks (CNNs) and other AI models that classify targets, estimate distances, and prioritize threats. This data fusion creates three-dimensional maps for navigation, obstacle avoidance, and autonomous target tracking. In conflict zones like Iran, this capability allows drones to detect incoming threats, evade counter-fire, and hunt other drones with minimal human oversight. Unlike human eyes, which interpret context and cues, drone AI converts raw pixels into actionable intelligence at speeds unmatched by human operators.
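The building block those CNNs repeat at every layer is a small sliding-window sum. The pure-Python sketch below is illustrative only (real systems run GPU-accelerated versions over millions of learned weights), showing a 3x3 edge-detecting kernel responding only where a toy image’s brightness changes:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

image = [[0, 0, 0, 1, 1, 1] for _ in range(4)]  # dark left, bright right
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]  # Sobel-style vertical-edge detector

response = convolve2d(image, edge_kernel)
print(response[0])  # [0, -3, -3, 0]: nonzero only at the brightness boundary
```

Stacking thousands of such kernels, with weights learned from data rather than hand-chosen, is what lets a network progress from edges to shapes to whole objects.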

The use of low-cost attack drones in swarms by Iran has posed significant challenges to traditional U.S. and allied air defenses. These drones employ a saturation tactic: deploying hundreds of inexpensive, autonomous drones equipped with vision systems that can overwhelm radar and missile batteries, forcing defenders to expend costly interceptors against relatively low-cost threats. This has prompted the U.S. and Gulf allies to adopt AI-powered interceptors and collaborate with Ukraine, which has pioneered similar drone countermeasures during its conflict with Russia. Expertise from Ukraine is now in high demand as nations scramble to defend against Iran’s swarm drone tactics. Drone vision has evolved into a force multiplier, a shield, and a weapon all in one.

Despite the sophistication of AI-powered drone vision, human oversight remains crucial. Human perception brings context, ethical reasoning, and intuition that machines cannot replicate. Commanders must interpret intent, weigh collateral impact, and make strategic decisions. However, drones increasingly blur the line: AI vision enables autonomous detection, tracking, and engagement, performing in milliseconds what would take humans much longer. The result is a battlefield where the ability to see first and act fastest can decisively alter outcomes.

Current drones that rely on computer vision and machine learning still face limitations in context and interpretation, which highlight the challenges of today’s AI models. While AI systems excel at recognizing visual patterns, they often lack a deeper understanding of meaning, intent, and cultural context. For instance, a neural network trained to identify buildings might classify structures based on shapes or rooftops, but a school, mosque, temple, hospital, or apartment complex can appear visually similar from the air. Without additional contextual data—such as signage, activity patterns, or human oversight—the model may misclassify a building, particularly in conflict zones where training data may be limited or biased.

Another limitation is that AI models struggle with generalization and ambiguity. Many vision systems are trained on large datasets, but these datasets may not encompass the diversity of buildings, cultural architecture, or real-world conditions found in conflict zones. A mosque dome might be mistaken for another round structure, or a school playground might be confused with a public courtyard. Models can also fail when buildings are partially damaged, obscured by smoke or shadows, or when viewing angles change.

Because neural networks rely on statistical patterns rather than true understanding, they can make confident but incorrect predictions, underscoring the need for human oversight in military drone operations. These limitations highlight a key challenge in AI vision: recognizing objects is not the same as understanding their significance in the real world.

China currently dominates the global drone manufacturing market, producing the majority of commercial and consumer unmanned aerial vehicles and supplying key technologies that have shaped global markets. Government-backed industrial policy and subsidies have enabled Chinese firms to control approximately 90% of the global consumer drone market and over 70% of enterprise drones. In contrast, India is emerging as one of the fastest-growing drone markets in the Asia-Pacific region, with its market value projected to rise from hundreds of millions to several billion dollars over the next decade. While Indian manufacturers are scaling up and benefiting from innovation, much of the current supply chain still relies on imported components, and local production has not yet reached the level of China’s integrated drone ecosystem.

In the defense sector, the United States is rapidly working to catch up, particularly as drones play an increasingly central role in conflicts like the Iran war. High-profile private investment is now intertwined with national strategy, as evidenced by Eric Trump and Donald Trump Jr. backing a domestic drone venture called Powerus, which aims to supply advanced autonomous systems to the Pentagon amid rising military demand and bans on Chinese imports.

To enhance drone capabilities, significant improvements in vision systems are necessary. Drones require better three-dimensional perception and depth understanding to navigate safely through complex environments without GPS. Enhanced object recognition in low light, adverse weather, smoke, or partial obstructions will enable them to operate where humans and current sensors struggle. Drones also need real-time scene understanding to interpret context—distinguishing civilians from combatants, moving vehicles from obstacles, or recognizing dangerous areas—and long-range visual tracking to follow multiple moving targets and predict their movements.

Integrating AI-powered autonomous decision-making will allow drones to interpret complex visual data and make mission-critical choices without human input. Swarm coordination and distributed vision will enable groups of drones to share visual information, create a unified environmental map, detect threats collectively, and execute coordinated strategies. Miniaturization and energy-efficient computing will allow drones to carry these advanced vision systems without sacrificing flight time or maneuverability, unlocking fully autonomous and intelligent flight in challenging environments.

In this new reality, dominance in the sky is defined not just by the size of the aircraft fleet but by the effectiveness of drones in seeing, interpreting, and responding to threats. AI-driven drone vision has become the defining edge in modern warfare, and countries that fail to integrate these advancements risk falling behind.

The ongoing conflict in Iran illustrates a broader trend: nations now face adversaries capable of deploying swarms of low-cost, AI-guided drones that can evade defenses and strike critical targets. Vision-powered drones are prompting a reevaluation of air power, air defense, and tactical doctrine.

According to The American Bazaar, the future of warfare will increasingly hinge on the capabilities of intelligent drones and their vision systems.

Former Meta AI Scientist Secures Over $1 Billion for Human-Centric AI

A former Meta AI scientist has raised over $1 billion to advance artificial intelligence systems that prioritize human-like reasoning and understanding.

A former Meta AI scientist has successfully secured significant funding to support his mission of making artificial intelligence (AI) more human-centric. Advanced Machine Intelligence, a startup founded by Yann LeCun, the former chief AI scientist at Meta Platforms, announced on Tuesday that it has raised $1.03 billion at a pre-money valuation of $3.50 billion. The company aims to commercialize AI systems that focus on reasoning, planning, and developing “world models.”

Yann André LeCun is a prominent French-American computer scientist recognized for his pivotal contributions to the field of artificial intelligence. Born on July 8, 1960, in France, LeCun earned his engineering diploma and later obtained a PhD, embarking on a distinguished career in AI research. He is particularly known for his foundational work in deep learning, including the development of convolutional neural networks (CNNs), which have become essential in modern computer vision, image recognition, and machine learning. In recognition of his contributions, LeCun shared the 2018 ACM Turing Award with fellow AI pioneers Yoshua Bengio and Geoffrey Hinton, marking a significant milestone in the evolution of AI technology.

LeCun joined Facebook, now known as Meta Platforms, in 2013, where he co-founded the Facebook AI Research (FAIR) lab. He later served as Meta’s Chief AI Scientist, guiding long-term research and innovation in the field. In addition to his industry work, LeCun holds academic positions, including a professorship at New York University, where he continues to teach and conduct research.

The recent funding round for Advanced Machine Intelligence was co-led by notable investors, including Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Such substantial investments indicate strong market confidence in technologies that aim to expand AI capabilities beyond mere pattern recognition, venturing into areas such as reasoning, planning, and understanding complex systems.

Advanced Machine Intelligence is strategically targeting organizations that operate complex systems, including manufacturers, automakers, aerospace companies, biomedical firms, and pharmaceutical groups. “We want to become the main provider of intelligent systems, regardless of what the application is,” LeCun stated, emphasizing the company’s ambitious goals.

This development aligns with a broader trend within the AI industry, reflecting a shift toward creating systems that can model and interpret the real world in a manner that mimics human understanding. These “world-model” approaches have the potential to enhance AI adaptability and usefulness in high-stakes or unpredictable environments. By integrating reasoning and planning capabilities into AI systems, the company aims to accelerate automation in critical sectors, improve problem-solving in complex scenarios, and foster more sophisticated human-machine collaboration.

From an economic standpoint, the significant venture funding directed toward projects like Advanced Machine Intelligence underscores the strategic importance of AI as both a technological and competitive asset. Organizations and industries that effectively adopt advanced AI tools may experience substantial advantages in productivity, innovation, and decision-making.

The future of AI appears poised for transformation as companies like Advanced Machine Intelligence work to create systems that not only perform tasks but also understand and navigate the complexities of the world in a more human-like manner. This evolution could redefine the landscape of artificial intelligence and its applications across various sectors.

According to The American Bazaar, this funding marks a significant step forward in the quest to develop AI technologies that are more aligned with human reasoning and understanding.

Researchers Identify Source of Black Hole’s 3,000-Light-Year Jet Stream

A new study connects the M87 black hole to its powerful cosmic jet, revealing how it launches particles at nearly the speed of light.

A recent study has successfully linked the renowned M87 black hole, the first black hole ever captured in an image, to its impressive cosmic jet. This research sheds light on the mechanisms behind the black hole’s ability to launch particles at nearly the speed of light.

Published in the journal “Astronomy & Astrophysics,” the findings reveal that scientists have traced a 3,000-light-year-long cosmic jet back to its likely source point. This breakthrough was made possible through “significantly enhanced coverage” provided by the global Event Horizon Telescope network.

M87, a supermassive black hole located in the Messier 87 galaxy, is approximately 55 million light-years from Earth and boasts a mass 6.5 billion times that of the sun. The first image of M87 was unveiled to the public in 2019, following data collection by the Event Horizon Telescope in 2017.

Dr. Padi Boyd of NASA highlighted the significance of the discovery, noting that M87 is not only supermassive but also active. “Just a few percent are active at any given time,” she explained in a video about the black hole. “Are they turning on and then turning off? That’s an idea… We know there are very high magnetic fields that launch a jet. This image provides observational evidence that what we’ve been seeing for a while is actually being launched by a jet connected to that supermassive black hole at the center of M87.”

The black hole is known to consume surrounding gas and dust while simultaneously emitting powerful jets of charged particles from its poles, which form the extensive jet stream. This dual behavior has been reported by outlets such as Scientific American and Space.com.

Saurabh, the team leader at the Max Planck Institute for Radio Astronomy, stated, “This study represents an early step toward connecting theoretical ideas about jet launching with direct observations.” He emphasized the importance of identifying the jet’s origin and its connection to the black hole’s shadow, calling it a crucial piece in understanding how the central engine operates.

The Event Horizon Telescope is a global network of eight radio observatories that work together to detect radio waves emitted by astronomical objects, such as galaxies and black holes. This collaboration effectively creates an Earth-sized telescope capable of capturing detailed images and data.

The term “Event Horizon” refers to the boundary surrounding a black hole beyond which no light can escape, as defined by the National Science Foundation.

The recent findings stem from data collected by the Event Horizon Telescope in 2021. However, the authors of the study caution that while the results are robust under the assumptions and tests performed, definitive confirmation and more precise constraints will necessitate future observations with higher sensitivity. This will require additional stations and an expanded frequency range to improve intermediate-baseline coverage.

As researchers continue to explore the mysteries of black holes, these findings represent a significant advancement in our understanding of how these cosmic giants operate and influence their surroundings, according to Space.com.

Fake Google Gemini AI Promotes ‘Google Coin’ Cryptocurrency Scam

Scammers are leveraging a fake AI chatbot impersonating Google’s Gemini to promote a fraudulent cryptocurrency called “Google Coin,” according to researchers from Malwarebytes.

In an alarming development in the world of cryptocurrency scams, security researchers at Malwarebytes have uncovered a fraudulent website promoting a non-existent cryptocurrency called “Google Coin.” This site features a chatbot that falsely claims to be Google’s Gemini AI, designed to lure unsuspecting investors into making cryptocurrency payments.

The scam operates under the guise of an official Google product, complete with familiar branding and visuals that create an illusion of legitimacy. Visitors to the site interact with a chatbot that introduces itself as “Gemini, your AI assistant for the Google Coin platform.” This interaction is crafted to convince users they are engaging with a credible Google service.

When users pose investment-related questions, the chatbot responds with specific financial projections, claiming that purchasing 100 tokens at $3.95 each could yield returns exceeding $2,700 once the coin is “listed.” The site employs deceptive tactics, such as fake progress counters and countdowns, to create a sense of urgency and credibility. Once a user clicks “Buy,” they are directed to send Bitcoin to a specified wallet address, with the transaction being final and irreversible.
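Running the pitch’s own numbers underscores how implausible it is; the figures below come straight from the chatbot’s claims described above:

```python
tokens = 100
price_per_token = 3.95     # USD per token, as quoted by the fake chatbot
promised_payout = 2700.00  # USD return the chatbot claims after "listing"

invested = tokens * price_per_token
multiple = promised_payout / invested

print(f"Total invested: ${invested:.2f}")
print(f"Implied return: {multiple:.1f}x, promised with zero stated risk")
```

A guaranteed near-sevenfold return on an unlisted token is a classic hallmark of fraud; no legitimate offering promises fixed multiples before a listing even exists.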

It is crucial to note that there is no official “Google Coin.” The entire operation is a sophisticated scheme designed to siphon cryptocurrency from unsuspecting individuals. This scam effectively combines brand impersonation with artificial intelligence to enhance its credibility. The scammers have meticulously crafted a website that mimics Google’s aesthetic, employing logos and technical jargon that further mislead potential victims.

The chatbot is programmed with a tightly controlled script, confidently answering inquiries while avoiding any admission of risk. If users inquire about company registration or regulatory compliance, the chatbot deflects with vague assurances regarding security and transparency. This interaction is not with a clumsy scammer but with software engineered to persuade users around the clock. The chatbot can simultaneously engage with hundreds of individuals, providing personalized responses and nudging them toward sending cryptocurrency.

The interactive nature of this scam poses a significant risk, as it can lower users’ defenses. When a chatbot responds in real time, it can create an illusion of professionalism and reliability. Many individuals may think, “If this were fake, it wouldn’t sound so convincing.” However, this is precisely the tactic employed by scammers to instill confidence.

For those who fall victim to this scheme, the financial repercussions can be immediate and irreversible. Unlike credit card transactions, cryptocurrency payments cannot be reversed. There is no customer support line to contact, and no refund process available. Furthermore, engaging with a scam site may result in personal information, such as email addresses and wallet details, being circulated among fraud networks, increasing the likelihood of future scams targeting the victim.

Researchers at Malwarebytes emphasize the growing sophistication of crypto scams, particularly those utilizing AI tools to create polished and seemingly legitimate investment opportunities. However, there are steps individuals can take to mitigate their risk before investing or sending any digital currency.

First and foremost, if a cryptocurrency claims to be launched by a well-known company, it is essential to verify the information directly on the company’s official website. Major corporations typically announce significant financial products publicly. If confirmation cannot be found on the legitimate domain, it is prudent to assume the offering is fraudulent and to walk away.

Additionally, any investment that promises guaranteed returns or specific future prices should raise red flags. Real investments inherently carry risks and uncertainties, and promises of quick, predictable profits are classic indicators of scams.

Utilizing a password manager can also enhance security by generating strong, unique passwords for each account and securely storing them. This precaution can prevent scammers from accessing other accounts if they manage to trick users into providing credentials on a fake site. Many password managers also alert users if their information appears in known data breaches.
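As an illustration of what "strong, unique" means in practice, here is a minimal sketch of random password generation using Python's cryptographically secure `secrets` module. The 20-character length and character set are illustrative choices, not a recommendation from the article; a password manager performs an equivalent step for you.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing each character with the cryptographically secure `secrets` module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```

Because each character is drawn independently from roughly 94 symbols, a 20-character password of this form is far beyond practical guessing attacks; the hard part, which a manager solves, is storing a different one for every account.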

Employing robust antivirus software is another layer of protection, as it can help detect malicious websites, phishing attempts, and suspicious downloads before they can cause harm. This can prevent hidden malware from being installed while users are distracted by convincing scam pitches.

Identity theft protection services can monitor personal information, such as Social Security numbers and email addresses, alerting users if their data is being misused. If scammers collect personal details through a fraudulent investment site, early alerts can facilitate prompt action to mitigate financial damage.

Data removal services can assist in removing personal information from public data broker sites. The less information available online, the harder it becomes for scammers to target individuals with personalized pitches. Reducing one’s digital footprint can significantly lower exposure to fraud.

Before sending any cryptocurrency, it is advisable to pause and independently verify the recipient. Searching for reviews, warnings, and official announcements can help identify potential scams. If an investment opportunity creates a sense of urgency, such as countdowns or “final stage” messages, this should be treated as a warning sign.

As scammers increasingly employ sophisticated tactics, including artificial intelligence, to create polished and persuasive narratives, awareness remains a powerful tool. By taking a moment to verify claims, question guaranteed returns, and utilize protective tools, individuals can significantly reduce their risk of falling victim to scams.

For more information on this issue, refer to the findings from Malwarebytes.

Meta Smart Glasses Face Increasing Privacy Concerns Among Users

Meta’s AI smart glasses have raised significant privacy concerns after reports revealed that contractors in Kenya may have viewed sensitive footage captured by the devices.

Meta’s AI smart glasses, designed to seamlessly integrate technology into daily life, are facing serious scrutiny following allegations of privacy violations. An investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that contractors reviewing AI data in Nairobi, Kenya, may have accessed highly personal footage captured by the smart glasses. This footage reportedly includes intimate moments such as bathroom visits and sexual activity, raising alarms about user privacy and the ethical implications of AI training.

The controversy stems from the role of AI annotators—workers who review images, videos, or audio to help artificial intelligence systems learn and improve. These annotators play a crucial role in training AI by labeling content and verifying responses. According to the investigation, some of these workers have reported viewing videos recorded by Meta’s smart glasses, which can include sensitive scenes from everyday life. One annotator described seeing everything from living rooms to naked bodies, while another noted that although faces are supposed to be automatically blurred, this feature sometimes fails, leaving identities exposed. Additionally, some clips allegedly revealed credit cards and other sensitive information.

Many users may assume that AI systems learn autonomously, but human input is often essential for their development. Meta’s smart glasses feature an AI assistant that responds to user inquiries about their surroundings, such as identifying landmarks or explaining objects. To ensure accuracy, the system sometimes relies on training data reviewed by human contractors.

In response to the allegations, a Meta spokesperson stated, “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.” The spokesperson added that when users do share content, contractors may review this data to enhance user experience, a practice common among many tech companies. Meta claims to implement measures to filter data and protect user privacy.

The Ray-Ban Meta glasses are equipped with an LED indicator light that activates when photos or videos are being recorded, alerting those nearby that content is being captured. Furthermore, the company’s terms of service emphasize that users are responsible for adhering to applicable laws and using the glasses in a respectful manner, which includes avoiding harassment and respecting privacy rights.

Meta has also been in contact with Sama, a company that provides AI data annotation services. According to Meta, Sama has stated it is unaware of any workflows involving the review of sexual or objectionable content or instances where faces or sensitive details remain unblurred. Meta is continuing to investigate the matter.

This controversy arises as Meta expands the capabilities of its AI glasses, developed in collaboration with eyewear giant EssilorLuxottica. The glasses, which include a camera and an AI assistant, have seen a surge in sales, with reports indicating over 7 million pairs sold in 2025—a significant increase compared to previous years. However, alongside this growth, Meta has updated its privacy policies, including changes that keep AI camera features active unless users disable the “Hey Meta” voice command, and that remove the option to opt out of storing voice recordings in the cloud. For privacy advocates, these updates heighten concerns regarding user data protection.


The recent findings underscore a critical reality for users of smart glasses and similar wearable technology: AI devices often collect more information than users may realize. When users share content with AI systems, human reviewers may analyze that material to improve the technology, meaning that footage captured by users could be viewed by others during the training process. Moreover, wearable cameras can inadvertently record private moments, and while companies implement tools to blur faces or obscure identifying details, these systems are not infallible. As privacy policies evolve with the introduction of new AI features, staying informed about these changes is essential for users to assess their comfort level with the technology.

As smart glasses transition from novelty items to everyday gadgets, the appeal of having AI assist in understanding the world around us is undeniable. However, the same technology that enhances these devices also raises complex privacy issues. The presence of always-accessible cameras, AI systems that learn from real-world footage, and human reviewers involved in training these systems create a data chain that many users may not fully consider.

This raises a pivotal question: Would you feel comfortable wearing AI glasses knowing that someone, potentially halfway around the world, might review the footage your device captures? The implications of such technology warrant careful consideration as we navigate the intersection of innovation and privacy.

For further insights and updates on technology and privacy, visit CyberGuy.com.

Spectacular Blue Spiral Light Likely Originates from SpaceX Rocket

A stunning blue spiral light, likely from a SpaceX Falcon 9 rocket, illuminated the night sky over Europe, captivating viewers and sparking discussions on social media.

A mesmerizing blue light, reminiscent of a cosmic whirlpool, lit up the night sky over Europe on Monday. This extraordinary phenomenon was captured in striking video footage and is believed to have been caused by the SpaceX Falcon 9 rocket booster re-entering the Earth’s atmosphere.

The time-lapse video, recorded in Croatia around 4 p.m. EST (9 p.m. local time), showcases the glowing spiral as it traverses the sky. Many social media users compared the sight to a spiral galaxy, highlighting its ethereal beauty. The full video, when played at normal speed, lasts approximately six minutes.

The Met Office in the U.K. reported receiving numerous accounts of an “illuminated swirl in the sky.” Experts indicated that the light was likely a result of the SpaceX rocket, which had launched from Cape Canaveral, Florida, at around 1:50 p.m. EST as part of the classified NROL-69 mission for the National Reconnaissance Office (NRO), the U.S. government’s intelligence and surveillance agency.

“This is likely to be caused by the SpaceX Falcon 9 rocket, launched earlier today,” the Met Office stated on X. “The rocket’s frozen exhaust plume appears to be spinning in the atmosphere and reflecting the sunlight, causing it to appear as a spiral in the sky.”

This glowing spectacle is often referred to as a “SpaceX spiral,” according to Space.com. Such spirals occur when the upper stage of a Falcon 9 rocket separates from its first-stage booster. As the upper stage continues its journey into space, the lower stage descends back to Earth, releasing any remaining fuel. This fuel freezes almost instantly at high altitudes, and sunlight reflects off the frozen particles, creating the striking glow observed in the sky.

Fox News Digital reached out to SpaceX for further comment but did not receive an immediate response. The stunning display in the sky came just days after a SpaceX team, in collaboration with NASA, successfully returned two astronauts who had been stranded in space.

The captivating blue spiral not only delighted onlookers but also served as a reminder of the intricate and often spectacular phenomena associated with space exploration and rocket launches. As technology continues to advance, such displays may become more common, sparking curiosity and wonder among those who gaze upward.

According to Space.com, these phenomena highlight the remarkable interplay between human ingenuity and the natural world, as we continue to push the boundaries of what is possible in space travel.

Beware of Extortion Scam Emails Claiming Your Data Is Compromised

Experts warn that extortion scam emails claiming hackers have stolen personal data are flooding inboxes, preying on fear and urgency to manipulate victims into paying ransoms in Bitcoin.

In recent weeks, a wave of extortion scam emails has inundated inboxes across the globe, with scammers claiming to have stolen sensitive personal information. These emails often create a sense of urgency and fear, leaving recipients feeling vulnerable and anxious about their digital security.

One reader, Bobby D, reached out after receiving a particularly alarming message. “I received the attached email, and I’m wondering what to do. I have the capability to mark it as Spam with my email provider, Earthlink. Because of its threatening nature, is there any other type of action you can recommend?” he asked. “I was wondering if just designating it as spam, there really would be no deterrence for the sender?”

The content of these emails is designed to unsettle recipients. They often claim to possess complete personal information, threatening to sell it on the dark web unless a ransom—typically demanded in Bitcoin—is paid quickly. The message may read something like, “I have your complete personal information… I will send this package to dark net markets… Or you can buy it from me for 1000 USD in Bitcoin…”

If this scenario sounds familiar, you are not alone. These extortion emails are part of a widespread campaign targeting thousands of individuals. The messages are crafted to sound credible and detailed, but upon closer inspection, the warning signs become apparent.

Scammers often fail to provide any concrete evidence of their claims. There are no screenshots, passwords, or files attached to substantiate their threats. Instead, they rely on vague phrases like “a multitude of files” and “your devices,” which sound dramatic but lack specificity. In contrast, legitimate data breaches typically include detailed information.

Moreover, any email demanding payment in Bitcoin while advising recipients not to inform anyone follows a classic scam formula. Reputable companies do not operate in this manner. It is crucial to understand that these emails are not personal attacks; they are mass-produced messages sent to countless addresses simultaneously, with the hope that a small percentage of recipients will be frightened enough to comply.

It is essential to recognize that your email address may have appeared in a previous data breach, but this does not mean that your devices or accounts have been compromised. Scammers purchase lists of leaked emails and send out these threatening messages in bulk. Even a single successful payment can make the entire operation profitable for them.

If you receive one of these emails, here is the recommended course of action:

Do not respond. Engaging with the sender confirms that your email address is active, which may lead to further threats.

Do not pay the ransom. Paying does not guarantee your safety; it only indicates that the scam has worked.

Instead, flag the email as spam with your email provider, such as EarthLink. This action helps train spam filters and reduces the likelihood of similar messages reaching you and others in the future. Once reported, delete the email and move on. To Bobby’s question, marking it as spam is indeed helpful. While it may not stop the individual sender, it contributes to the broader effort to combat these scams.

While it is impossible to prevent scammers from attempting to exploit individuals, there are steps you can take to protect yourself. Reusing passwords across multiple accounts increases the risk associated with data breaches. Utilizing a password manager can help you create and store strong, unique passwords for each of your accounts.

Additionally, check if your email has been exposed in past breaches. Some password managers include built-in breach scanners that can alert you if your information has been compromised. If you find that your email or passwords have appeared in known leaks, change any reused passwords immediately and secure those accounts with new, unique credentials.
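For context, breach scanners such as Have I Been Pwned's Pwned Passwords service use a k-anonymity scheme: only the first five hex characters of your password's SHA-1 hash ever leave your machine, and the service returns all matching hash suffixes for you to compare locally. A minimal sketch of the client-side hashing (the function name is illustrative, and the actual network call is omitted):

```python
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    (sent to the API) and the 35-character suffix (kept local)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
# A client would then fetch https://api.pwnedpasswords.com/range/<prefix>
# and check whether <suffix> appears in the returned list.
print(prefix)  # 5BAA6
```

The design choice matters for privacy: the five-character prefix matches millions of possible passwords, so the service learns essentially nothing about which password you checked.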

Implementing two-factor authentication (2FA) adds an extra layer of security, even if your password is leaked. Regular updates to your software and applications can also close security gaps that scammers exploit.
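For readers curious how the rotating six-digit codes in most authenticator apps are produced, here is a compact sketch of the TOTP algorithm (RFC 6238) using only Python's standard library. It is an illustration of the mechanism, not a hardened implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ` (the base32 encoding of `12345678901234567890`), `totp(..., at=59, digits=8)` reproduces the published test vector `94287082`. Because the code depends on a shared secret plus the current time, a leaked password alone is not enough to log in.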

Consider using data removal services to limit the amount of personal information available online. By reducing the information accessible to scammers, you make it more challenging for them to cross-reference data from breaches with what they may find on the dark web.

Never click on links in threatening emails. Strong antivirus software can help block malicious sites and fake support pages. The best way to protect yourself from harmful links that could install malware is to ensure you have robust antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, safeguarding your personal information and digital assets.

Scam emails thrive on panic and urgency. Taking a moment to verify the legitimacy of a message can diminish its power. Many people question whether marking these emails as spam is effective. It is. Spam reports assist email providers in identifying patterns, blocking sender networks, and reducing future scam attempts. While you may not stop the individual scammer, your actions contribute to the protection of others.

Ultimately, extortion scam emails succeed by exploiting fear. They aim to prompt quick, unconsidered actions. By pausing to question the message and verifying its authenticity, you can defuse the threat. No files have been stolen, and no devices have been hacked—just a recycled script designed to instill fear. If you have received one of these emails, you have done the right thing by stopping and seeking advice.

Have you ever encountered a threatening email that initially caused you distress before you realized it was a scam? What helped you identify it, or what would you do differently next time? Share your experiences with us at CyberGuy.com.

According to CyberGuy.com, staying informed and vigilant is the best defense against these types of scams.

Pentagon’s AI Initiatives: A New Frontier in Defense Technology

The Pentagon’s ongoing battle over artificial intelligence will significantly influence the future of military technology and its implications for global power dynamics.

The Fox News AI Newsletter highlights the latest advancements in artificial intelligence technology, focusing on the challenges and opportunities that AI presents both now and in the future.

In this edition, we explore the Pentagon’s ongoing AI battle, which is poised to determine who controls the most powerful military technologies. As AI continues to evolve, its integration into defense systems raises critical questions about security, ethics, and global power dynamics.

Additionally, researchers at Imperial College London are developing an innovative AI-powered T-shirt designed to monitor heart health over extended periods. This groundbreaking garment aims to detect inherited heart rhythm disorders that often go unnoticed until they pose significant health risks.

In an opinion piece, Margaret Spellings emphasizes the urgency for American schools to prepare students for an AI-driven future. She notes that the rapid pace of technological change is reshaping the workforce and economy, leaving educational systems struggling to keep up.

Steve Forbes also weighs in, arguing that the nation that establishes the standards for AI will shape the future. He warns that while America has historically set the rules in various industries, China is poised to take the lead in the AI arena.

On the digital front, Microsoft has announced a new technical blueprint aimed at verifying the authenticity of online content. This initiative comes in response to the growing prevalence of misleading information on social media platforms.

In a significant move, major tech companies have backed President Donald Trump’s Ratepayer Protection Pledge, committing to absorb the costs associated with running energy-intensive AI data centers. This agreement, which includes companies like Google, Microsoft, and Amazon, aims to prevent these expenses from being passed on to consumers.

Moreover, new policies on the social media platform X are set to penalize creators who share AI-generated videos of armed conflicts without proper disclosure. This initiative seeks to combat misinformation and manipulation in online content.

Lastly, X’s AI chatbot, Grok, has begun rolling out its beta version, Grok 4.20. Elon Musk and the X team claim this update will enhance performance and introduce new features while aiming to minimize perceived political bias.

The debate surrounding the energy consumption of data centers continues to grow, as these facilities are crucial for powering AI, search engines, and various online services that people rely on daily.

Stay informed about the latest advancements in AI technology and the challenges and opportunities it presents by following the Fox News AI Newsletter.

According to Fox News, the implications of AI technology are vast and multifaceted, impacting everything from military strategy to personal health monitoring.

Wolf Species Made Famous in ‘Game of Thrones’ Revived, Company Claims

A Dallas-based company claims to have resurrected the dire wolf, an extinct species made famous by “Game of Thrones,” using advanced genetic technologies.

A Dallas-based biotechnology company, Colossal Biosciences, has announced that it has successfully brought back the dire wolf, a species that last roamed the Earth over 12,500 years ago. The dire wolf gained popularity through the hit HBO series “Game of Thrones,” where it is depicted as a larger and more intelligent version of the common wolf, fiercely loyal to the Stark family.

Colossal Biosciences asserts that it has created three dire wolves through genome-editing and cloning techniques, marking what it claims to be the world’s first successful “de-extinction” of an animal. However, some experts question the validity of this claim, suggesting that the company has merely genetically modified existing gray wolves rather than truly resurrecting the extinct species.

According to Colossal, dire wolves inhabited the American midcontinent during the Ice Age, with the oldest confirmed fossil dating back approximately 250,000 years, discovered in the Black Hills of South Dakota. The three new wolves are two adolescent males named Romulus and Remus and a female puppy called Khaleesi.

The scientists at Colossal utilized blood cells from a living gray wolf and employed CRISPR technology—short for “clustered regularly interspaced short palindromic repeats”—to make genetic modifications at 20 different sites. These alterations were designed to replicate traits believed to have helped dire wolves survive in cold climates, such as larger body size and longer, lighter-colored fur. Of the 20 edits made, 15 correspond to genes found in actual dire wolves.

The ancient DNA used for the project was extracted from two dire wolf fossils: a tooth from Sheridan Pit, Ohio, estimated to be around 13,000 years old, and an inner ear bone from American Falls, Idaho, which dates back approximately 72,000 years. The modified genetic material was then transferred into an egg cell from a domestic dog. Afterward, the embryos were implanted into surrogate domestic dogs, leading to the birth of the genetically engineered pups 62 days later.

Ben Lamm, CEO of Colossal Biosciences, described the achievement as a significant milestone in the company’s efforts to demonstrate the effectiveness of its de-extinction technology. “It was once said, ‘any sufficiently advanced technology is indistinguishable from magic,’” Lamm stated. “Today, our team gets to unveil some of the magic they are working on and its broader impact on conservation.”

Colossal has previously announced similar projects aimed at genetically altering cells from living species to create animals resembling other extinct species, including woolly mammoths and dodos. In conjunction with the announcement of the dire wolves, the company also revealed the birth of two litters of cloned red wolves, which are critically endangered. This development, according to Colossal, demonstrates the potential of their de-extinction technology to aid in conservation efforts.

In late March, Colossal’s team met with officials from the U.S. Department of the Interior to discuss their projects. Interior Secretary Doug Burgum praised the work on social media, calling it a “thrilling new era of scientific wonder.” However, some scientists remain skeptical about the feasibility of restoring extinct species.

Corey Bradshaw, a professor of global ecology at Flinders University in Australia, expressed doubts regarding Colossal’s claims. “So yes, they have slightly genetically modified wolves, maybe, and that’s probably the best that you’re going to get,” Bradshaw remarked. “And those slight modifications seem to have been derived from retrieved dire wolf material. Does that make it a dire wolf? No. Does it make a slightly modified gray wolf? Yes. And that’s probably about it.”

Colossal Biosciences reports that the newly created wolves are thriving in a 2,000-acre ecological preserve in Texas, which is certified by the American Humane Society and registered with the USDA. Looking ahead, the company aims to restore the species in secure ecological preserves, potentially on indigenous lands.

As the debate continues regarding the ethical implications and scientific validity of de-extinction efforts, the work of Colossal Biosciences represents a bold step into the future of genetic engineering and conservation.

According to Fox News, the implications of such advancements could reshape our understanding of extinct species and their potential return to the ecosystem.

The Rise of Efficient AI: Balancing Energy Needs During the Boom

The emergence of ‘efficient’ or ‘green’ AI is reshaping the technology landscape, as companies strive to reduce energy consumption amid soaring demand for artificial intelligence.

In the rapidly evolving AI landscape, energy efficiency is becoming a crucial competitive metric alongside performance and scalability. As the demand for AI technologies surges, companies are racing to develop models that consume significantly less electricity.

Vasudha Badri Paul, CEO of Avatara AI, emphasizes the importance of this trend, stating, “Companies that adopt an energy-first approach for AI are the future.”

As artificial intelligence becomes increasingly integrated into daily life—from search engines to business applications—a pressing concern has emerged regarding the growing energy footprint associated with these technologies. A recent report from TRG Datacenters sheds light on this challenge, revealing that leading AI developers are making strides to enhance the energy efficiency of their models.

Chris Hinkle, CEO of TRG Datacenters, notes the alarming trajectory of AI demand: “The math is simple but scary: AI demand is on track to quadruple by 2030, and our power grids just aren’t built for that speed. We’re hitting a physical wall where we can’t just build more data centers; we have to make the software stop being so ‘hungry.’”

The study conducted by TRG Datacenters examined major language models to assess how companies are saving energy amid the technology’s growth. The findings indicate a clear trend: the latest generation of AI models is becoming significantly more efficient, even as usage continues to rise. Many experts agree that enhancing the energy efficiency of AI systems is as vital as expanding their capabilities, particularly given the exponential growth in global demand.

Among the models analyzed, Grok 4.1 stands out for its efficiency gains, reducing energy consumption by 38 percent compared to its predecessor. Despite processing 134 million daily queries, Grok 4.1 decreased its power requirement from 0.55 watt-hours per query to 0.34. This improvement also lowered the average cost per request from $0.000098 to $0.000061, marking the most significant enhancement recorded in the study. Researchers have hailed it as “the most energy-efficient model in the world today.”
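The reported Grok 4.1 figures are internally consistent, as a quick check shows. The implied daily-energy estimate below is a back-of-the-envelope illustration derived from the report's per-query numbers, not a figure from the report itself:

```python
# Per-query figures reported by TRG Datacenters for Grok 4.1.
old_wh, new_wh = 0.55, 0.34        # watt-hours per query, before and after
daily_queries = 134_000_000

reduction = 1 - new_wh / old_wh
print(f"Energy reduction: {reduction:.0%}")   # 38%, matching the reported gain

# Implied daily energy use, in megawatt-hours (1 MWh = 1,000,000 Wh).
daily_mwh = new_wh * daily_queries / 1_000_000
print(f"Implied daily energy: {daily_mwh:.1f} MWh")
```

Even a few hundredths of a watt-hour per query compounds quickly at this scale, which is why per-query efficiency has become a headline metric.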

This trend reflects a broader movement within the technology sector toward what experts are calling Green AI, an approach focused on minimizing the environmental impact of large-scale artificial intelligence systems. Sridhar Verose, a council member in San Ramon and a technologist with over two decades of experience in cloud operations and digital transformation, underscores the necessity of this shift. “Green AI is driven by the need to reduce the rapidly growing energy demands of large-scale AI models. A multi-layered approach combines energy-efficient hardware, algorithmic efficiency, and specialized, smaller model architectures,” he explains.

The research also highlights Google’s Gemini 3, which ranks second in energy efficiency, achieving a 35 percent reduction in energy consumption. The model supports an estimated 850 million daily queries while maintaining the lowest cost per request in the ranking at just $0.000043. By cutting its power usage by more than a third, Gemini 3 demonstrates that large-scale AI systems can expand rapidly while keeping operating costs and electricity demand manageable.

Other leading AI systems have also reported significant improvements. Claude Opus 4.5 from Anthropic reduced electricity use by 27 percent while processing around 180 million daily queries. Meanwhile, the Chinese-developed DeepSeek V3.2 improved efficiency by 25 percent while handling approximately 650 million daily queries.

The urgency for energy-efficient AI is escalating as global demand continues to rise. Data centers are already responsible for a growing share of electricity consumption, and the explosive growth of generative AI tools is expected to further accelerate this trend.

Vasudha Badri Paul reiterates the need for aligning AI development with climate considerations. “The need is to align computing with the future of climate by using stranded, wasted energy to power AI workloads. Companies that adopt an energy-first approach for AI are the future,” she asserts.

If the findings from the research are any indication, the coming years could see even more energy-efficient models. Efficiency gains of 30 percent or more from models such as Grok and Gemini signal meaningful progress in the field.

Hinkle also emphasizes that the shift toward efficiency is critical for sustaining the rapid growth of AI. “Seeing models like Grok or Gemini slash their energy use by 30% or more proves that we can actually make these systems smarter without just throwing more juice at them,” he states.

He further illustrates the impact of these efficiency improvements by referencing GPT-5.2, which achieved a 19 percent reduction across 2.5 billion daily queries, savings he says amount to enough energy to power an entire city. “This kind of ‘efficiency-first’ mindset is the only way we keep the lights on while the AI boom continues,” Hinkle concludes.

As the demand for AI technologies continues to rise, the push for energy-efficient solutions will be paramount in ensuring a sustainable future for artificial intelligence.

These findings are drawn from research by TRG Datacenters.

U.S. Introduces New Regulations for AI Chip Exports

The United States is considering new regulations for exporting artificial intelligence chips, potentially requiring foreign investments in U.S. data centers as a condition for large-scale exports.

The United States is contemplating the introduction of new rules governing the export of artificial intelligence (AI) chips. According to a document reviewed by Reuters, U.S. officials are in discussions about a regulatory framework that may require foreign nations to invest in U.S. AI data centers or provide security guarantees as a prerequisite for exporting 200,000 chips or more.

This initiative marks the first significant attempt to regulate the export of AI chips to U.S. allies and partners since the Trump administration rescinded the previous administration’s AI diffusion rules. Those earlier rules aimed to retain a substantial portion of AI infrastructure development within the U.S. and directed most purchases through a select group of American cloud computing companies.

Saif Khan, a former national security official in the Biden administration and now affiliated with the Institute for Progress, a Washington think tank, commented on the potential impact of the proposed regulations. “The rule could help the U.S. government address chip diversion to China and ensure a more secure buildout of the most powerful AI supercomputers,” he said. “However, the license requirements are overly broad, applying globally, which raises concerns that the administration intends to use these controls as negotiation leverage with allies rather than strictly for security purposes.”

If implemented, this proposal could give the Trump administration significant leverage in negotiating investments in the U.S., aligning with one of Trump’s key priorities, as the administration determines how AI chips are allocated to various countries.

The U.S. Commerce Department has expressed its commitment to promoting secure exports of American technology. “We successfully advanced exports through our historic Middle East agreements, and there are ongoing internal government discussions about formalizing that approach,” the department stated.

The potential regulation of AI chip exports reflects a broader shift in the intersection of technology, national security, and economic strategy on the global stage. As AI technology becomes increasingly integral to commercial innovation and geopolitical influence, controlling the distribution of critical hardware serves not only to protect domestic interests but also to shape international partnerships.

Such measures could redefine the balance of power in AI development, encouraging foreign nations to collaborate closely with U.S. infrastructure and security frameworks. This approach aims to ensure that sensitive technology is not diverted in ways that could compromise strategic objectives.

Beyond immediate security concerns, this strategy underscores a growing recognition that advanced technologies are intertwined with economic and diplomatic leverage. By linking chip exports to investments or commitments in U.S.-based infrastructure, the U.S. could establish new standards for how technological ecosystems are developed, maintained, and shared globally.

This regulatory approach may foster more sustainable and accountable global tech development while enhancing the U.S.’s influence in shaping AI norms and safeguards.

The potential changes to AI chip export regulations highlight the evolving landscape of international technology policy, where economic interests and national security considerations increasingly intersect.

As discussions continue, the outcome of these deliberations could have far-reaching implications for the future of AI technology and its role in global economic dynamics, according to Reuters.

AI Uncovers $163K in Fraudulent Medical Bill Charges

A man successfully reduced a hospital bill by over $100,000 using AI tools to identify billing errors, highlighting the potential of technology in managing medical expenses.

In a remarkable case, a man utilized an AI chatbot to significantly reduce a hospital bill following his brother-in-law’s tragic heart attack. The initial bill for just four hours of emergency care totaled an astonishing $195,628. However, before his sister-in-law could pay, he urged her to wait and requested an itemized bill that included CPT codes—the standardized billing codes used by hospitals.

After receiving the itemized bill, he input the information into Claude, an AI chatbot. Within minutes, Claude identified numerous discrepancies, including duplicate charges, services billed as “inpatient” despite the patient never being admitted, and supply costs inflated by 500% to 2,300% above Medicare rates. Additionally, there were charges for procedures that had not occurred. To ensure accuracy, he cross-checked the findings with ChatGPT, which corroborated Claude’s results.
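The checks described above are, at their core, simple comparisons that anyone can reproduce. A minimal sketch of the same idea, using hypothetical CPT codes, charges, and reference rates (illustrative only, not the actual bill from this story), might look like this:

```python
from collections import Counter

# Hypothetical (CPT code, description, charge) lines -- illustrative only,
# not the actual bill from this story.
bill = [
    ("99285", "ER visit, high severity", 2400.00),
    ("99285", "ER visit, high severity", 2400.00),  # duplicate line
    ("J0131", "Acetaminophen injection", 235.00),
]
# Illustrative reference rates (assumed, not real fee-schedule data).
medicare_rates = {"99285": 620.00, "J0131": 10.00}

# Check 1: the same CPT code billed more than once.
dupes = [code for code, n in Counter(c for c, _, _ in bill).items() if n > 1]

# Check 2: markup of each charge relative to the reference rate.
markups = {code: round(charge / medicare_rates[code], 1)
           for code, _, charge in bill if code in medicare_rates}
print(dupes, markups)
```

In this toy example the duplicate ER line is flagged immediately, and the injection shows a 23.5x markup, the same order of inflation (500% to 2,300%) reported in the actual case.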

Armed with this information, he drafted a six-page letter detailing each violation. As a result, the hospital agreed to reduce the bill to $33,000, marking an impressive 83% decrease—all achieved without any medical training and with the help of a $20 app.

This story, while extraordinary, is not as isolated as it may seem. The Medical Billing Advocates of America estimates that approximately 75% of medical bills contain errors. On average, hospital bills exceeding $10,000 have around $1,300 in mistakes. Alarmingly, less than 1% of denied insurance claims are ever appealed, indicating that many patients may be unaware of their rights and the potential for errors in their bills.

AI technology is transforming the way patients can approach their medical billing disputes. With AI tools, individuals no longer need an extensive understanding of CPT codes or a background in medical billing to challenge their bills effectively. The process is straightforward:

First, contact your healthcare provider and request an itemized bill that includes CPT codes. It is important to ask for the full line-by-line breakdown rather than a summary, as patients are legally entitled to this information.

Next, open an AI tool such as ChatGPT, Claude, Grok, or Gemini (free versions are available) and paste the following request:

“I’m pasting my itemized medical bill below. Please: (1) Explain every charge in plain English, (2) Flag any duplicate or suspicious charges, (3) Compare each charge to average costs, (4) Identify billing code errors or bundling violations, and (5) Draft a dispute letter I can send to the billing department. Here’s my bill:”

After pasting your bill, the AI will analyze each line and highlight any discrepancies or errors it identifies.
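For readers comfortable with a little scripting, the request above can be wrapped in a small helper that prepends the article's prompt to a bill before pasting it into whichever chatbot you use. This is merely a convenience sketch of the same workflow:

```python
PROMPT_TEMPLATE = (
    "I'm pasting my itemized medical bill below. Please: "
    "(1) Explain every charge in plain English, "
    "(2) Flag any duplicate or suspicious charges, "
    "(3) Compare each charge to average costs, "
    "(4) Identify billing code errors or bundling violations, and "
    "(5) Draft a dispute letter I can send to the billing department. "
    "Here's my bill:\n\n"
)

def build_dispute_prompt(itemized_bill: str) -> str:
    """Combine the article's prompt with the bill text, ready to paste."""
    return PROMPT_TEMPLATE + itemized_bill.strip()
```

The output of `build_dispute_prompt` can be pasted directly into ChatGPT, Claude, Grok, or Gemini, exactly as described in the steps above.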

If the AI uncovers mistakes—something that is likely—contact the billing department and ask to speak with a supervisor. Be sure to reference the specific codes and findings from your AI analysis. Hospitals are often willing to resolve disputes when patients come prepared with detailed information.

For those looking for additional resources, Counterforce Health (counterforcehealth.org) is a free AI tool specifically designed to assist with insurance denial appeals and is worth bookmarking for future reference.

As the landscape of healthcare billing continues to evolve, it is crucial for patients to take a proactive approach in reviewing their medical bills. Utilizing AI tools can empower individuals to challenge inaccuracies and potentially save significant amounts of money.

In a world where discussions about AI are prevalent, practical applications like this demonstrate how technology can be harnessed to address real-life challenges. For those seeking further insights into leveraging AI effectively, consider subscribing to the free newsletter, Splash of AI, which offers weekly tips and tools designed to simplify the use of technology in everyday life.

Sharing this information with someone who is grappling with a confusing medical bill could lead to substantial savings. It takes less time than brewing a cup of coffee and could save hundreds or even thousands of dollars.

Kim Komando, a trusted voice in technology, provides straightforward advice without the jargon. Her national radio show, available on over 500 stations, along with a free daily newsletter, YouTube content, and podcasts, offers valuable insights for navigating the tech landscape.

For more information, visit Komando.com.

According to Fox News, AI is becoming an essential tool for patients seeking to identify and rectify medical billing errors.

Shreya Parchure Uses AI to Aid Stroke Survivors in Speech Recovery

Shreya Parchure, an Indian American doctoral student, is pioneering an AI tool to personalize speech therapy for stroke survivors, enhancing recovery prospects for those affected by post-stroke aphasia.

Shreya Parchure, an Indian American researcher and doctoral student at the University of Pennsylvania, is making significant strides in the field of speech therapy for stroke survivors. Her innovative approach utilizes artificial intelligence (AI) to personalize treatment for individuals suffering from post-stroke aphasia, a condition that impairs the ability to understand or produce speech and affects approximately one-third of stroke survivors.

Growing up across two continents, Parchure developed a deep appreciation for the importance of language in enhancing quality of life. Her clinical rotations in a neurocritical care unit further solidified her commitment to advancing research and care for patients with aphasia. During her interactions with patients, she witnessed firsthand the profound impact that speech therapy can have on recovery. One patient, who initially struggled to speak, gradually regained her ability to communicate through dedicated therapy. “She was overjoyed,” Parchure recalls, highlighting how progress in speech therapy can instill hope in patients.

Traditional speech therapies for post-stroke aphasia often follow standardized protocols. However, Parchure and her team at the Laboratory for Cognition and Neural Stimulation (LCNS) are exploring the potential of “explainable AI.” This set of machine learning methods focuses on providing clear rationales behind AI-generated results, enabling healthcare providers to interpret and trust the recommendations made by the technology.

While some AI models have utilized neuroimaging and the duration since a stroke to assess aphasia severity, Parchure’s research expands on these methods by incorporating how language is formed and processed in the brain. “Explainable AI can integrate clinically available data—such as age, education, or the size of a stroke—with the linguistic difficulty of words,” she explains. This multifaceted approach allows the AI model to predict recovery timelines and suggest tailored treatments based on individual patient circumstances.

“When we have an AI making a prediction, we really want to know why,” Parchure emphasizes. She has leveraged speech samples from patients with post-stroke aphasia to train an explainable AI algorithm, testing its ability to account for various language tasks and make recovery predictions based on a diverse array of clinically relevant information. The tool also considers personal attributes, such as the size of the stroke and the level of social support available to the patient.

“Incorporating language into the fold adds a new layer of considering human and brain complexity,” Parchure notes. The explainable AI tool can predict speech performance on a word-by-word basis, which can help clinicians identify the underlying factors affecting a patient’s speech abilities. This granularity informs more nuanced treatment plans and recovery predictions.
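What "explainable" means in practice is that every prediction decomposes into named contributions a clinician can inspect. The following is a toy illustration of that idea only; the weights and features are invented for this sketch and are not the team's actual model:

```python
# Toy, purely illustrative "explainable" word-level score: each feature's
# contribution is visible by construction. Weights and features are invented
# for illustration; this is NOT the team's actual model.
weights = {"word_frequency": 0.8, "word_length": -0.3, "months_post_stroke": 0.1}

def explain_score(features: dict) -> tuple:
    """Return an overall score plus the per-feature contributions."""
    contributions = {k: weights[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_score(
    {"word_frequency": 2.0, "word_length": 4, "months_post_stroke": 6})
print(round(score, 2), why)
```

Because the per-feature contributions are returned alongside the score, a clinician can see, for instance, that word length pulled the prediction down while frequency pushed it up, the kind of rationale Parchure describes wanting from AI predictions.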

“It’ll help tailor speech therapy for where exactly people are having trouble,” Parchure states. “We can really meet patients where they are in a more personalized manner.” To facilitate this, Parchure and her colleagues have developed an AI-powered application for use in both clinical and research settings. A particularly innovative aspect of this research is the creation of a “digital twin” for each patient, which serves as a predictive tool for language recovery.

The simulated “twin” allows for a comparative analysis of how a patient may respond to different treatments, enhancing the efficiency of clinical trials by enabling researchers to compare projected outcomes with actual recovery results. “The goal of my MD-PhD training has been to translate advances in research in a way that will benefit patients,” Parchure explains. Her work has already garnered recognition, including the Best Poster award in Translational Research at the 2025 PSOM Student Research Symposium.

Looking ahead, Parchure envisions a future where AI plays a crucial role in personalizing speech therapy, ultimately helping stroke survivors with aphasia reconnect with the joy of language. “Over the next decade, I believe we will see significant advancements in this area,” she concludes.

According to Penn Today, Parchure’s research represents a promising development in the intersection of technology and healthcare, offering hope to countless individuals affected by stroke.

Google Uses AI to Decode Dolphin Communication

Google is leveraging artificial intelligence to decode dolphin communication, aiming to facilitate human interaction with these intelligent marine mammals.

Google is embarking on an ambitious project to harness artificial intelligence (AI) in order to decode the complex communication of dolphins, with the ultimate goal of enabling humans to converse with these intelligent creatures.

Dolphins have long been celebrated for their remarkable intelligence, emotional depth, and social interactions with humans. In collaboration with researchers from the Georgia Institute of Technology and the Wild Dolphin Project (WDP), a Florida-based non-profit that has dedicated over 40 years to studying and recording dolphin sounds, Google is developing a new AI model named DolphinGemma.

The WDP has been instrumental in correlating different types of dolphin sounds with specific behavioral contexts. For example, signature whistles are often used by mothers to reunite with their calves, while burst pulse “squawks” are typically observed during aggressive encounters among dolphins. Additionally, “click” sounds are frequently employed during courtship or when dolphins are pursuing sharks.

Utilizing the extensive data collected by the WDP, Google has created DolphinGemma, which builds upon its existing lightweight AI model known as Gemma. This innovative model is designed to analyze the vast library of dolphin vocalizations, detecting patterns, structures, and potential meanings behind their communications.

Over time, DolphinGemma aims to categorize dolphin sounds in a manner akin to human language, organizing them into what could resemble words, sentences, or expressions. According to a blog post by Google, “By identifying recurring sound patterns, clusters, and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort.”
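The core idea of finding "recurring sound patterns, clusters, and reliable sequences" can be illustrated with a deliberately simple sketch. The example below counts repeated bigrams over a toy sequence of labeled vocalization tokens; DolphinGemma's actual input would be learned audio representations, not ready-made labels:

```python
from collections import Counter

# Toy sequence of labeled vocalization tokens -- purely illustrative.
# Real input would be representations learned from audio, not labels.
sounds = ["whistle", "click", "click", "squawk",
          "whistle", "click", "click", "whistle"]

# Count recurring bigrams: sequences that repeat reliably hint at structure.
bigrams = Counter(zip(sounds, sounds[1:]))
print(bigrams.most_common(2))
```

Even in this trivial case, the "whistle then click" and "click then click" pairs each recur twice, the sort of statistical regularity that, at vastly larger scale, a model like DolphinGemma is built to surface.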

The project also envisions the creation of a shared vocabulary between dolphins and humans. By augmenting the identified sound patterns with synthetic sounds that refer to objects dolphins enjoy, researchers hope to establish a basis for interactive communication.

DolphinGemma employs advanced audio recording technology from Google’s Pixel phones, which enables the capture of high-quality sound recordings of dolphin vocalizations. This technology is capable of filtering out background noise, such as waves, boat engines, and underwater static, ensuring that the AI model receives clear audio data. Researchers emphasize that clean recordings are crucial for models like DolphinGemma, since noisy data can obscure the very patterns the model is trying to learn.

Google plans to release DolphinGemma as an open model this summer, allowing researchers worldwide to utilize and adapt it for their own studies. Although the model has been primarily trained on Atlantic spotted dolphins, it has the potential to assist in the study of other species, such as bottlenose or spinner dolphins, with some adjustments.

In the words of Google, “By providing tools like DolphinGemma, we hope to give researchers worldwide the means to mine their own acoustic datasets, accelerate the search for patterns, and collectively deepen our understanding of these intelligent marine mammals.”

As this groundbreaking project unfolds, it holds the promise of not only enhancing our understanding of dolphin communication but also fostering a deeper connection between humans and these remarkable creatures.

According to Google, the advancements made through DolphinGemma could pave the way for unprecedented interactions with dolphins, enriching both scientific knowledge and human experience.

Indian-American Researchers Launch AI Legislation Tracking Portal

Researchers at Brown University, led by Indian American professor Suresh Venkatasubramanian, have launched a portal to track and analyze pending AI legislation across the United States.

A team of researchers from Brown University, under the leadership of Indian American professor Suresh Venkatasubramanian, has unveiled a new tool designed to track and analyze pending artificial intelligence (AI) legislation at both the federal and state levels in the United States. This initiative aims to address the rapidly evolving landscape of AI technologies and their regulation.

The CNTR AISLE Portal serves as a public database that aggregates information on AI legislation currently pending across all 50 states and at the federal level. It also provides in-depth analyses conducted by trained evaluators, detailing the various aspects of AI policy that these bills encompass.

Developed by a collaborative team of faculty, students, and staff at the Center for Technological Responsibility, Reimagination and Redesign (CNTR), the portal is a significant step toward enhancing public understanding of AI legislation. Venkatasubramanian, a professor of computer science and data science at Brown, emphasized the tool’s importance given the rapid growth in AI-related legislation. “Over the last three years, over 1,000 AI-related bills have been introduced in the U.S.,” the AISLE team noted at the launch. “With AISLE, we will help the public, journalists, researchers, and policymakers identify key policy trends and assess the maturity of these proposals.”

The AISLE Portal features a comprehensive bill library that compiles all AI-related legislation from a larger legislative database known as LegiScan. A subset of these bills has been evaluated by the AISLE policy team, which consists of 17 undergraduate students and five graduate students trained to assess legislation using the AISLE framework.

This framework includes a set of 159 questions designed to evaluate the extent to which each bill pertains to six general categories: accountability and transparency, data protection, bias and discrimination, education, synthetic content, and the labor force. For each bill assessed, the portal provides a “bill profile” that summarizes its content according to the AISLE framework.
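A bill profile of this kind amounts to a structured record of which policy categories a bill touches. The sketch below imagines one possible shape for such a record; the field names and the placeholder bill are assumptions for illustration, not AISLE's actual schema:

```python
from dataclasses import dataclass, field

# The six general categories named in the AISLE framework.
CATEGORIES = [
    "accountability and transparency", "data protection",
    "bias and discrimination", "education",
    "synthetic content", "labor force",
]

@dataclass
class BillProfile:
    """Hypothetical shape for a bill profile; not AISLE's actual schema."""
    bill_id: str
    title: str
    categories: dict = field(default_factory=dict)  # category -> touched?

    def covered(self):
        return [c for c, hit in self.categories.items() if hit]

profile = BillProfile(
    bill_id="HB-0000",  # placeholder identifier
    title="An act concerning automated decision systems",
    categories={c: c in ("accountability and transparency", "data protection")
                for c in CATEGORIES},
)
print(profile.covered())
```

Aggregating profiles like this across hundreds of bills is what makes the trend analyses described below possible, since each bill reduces to a comparable, machine-readable summary.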

Venkatasubramanian highlighted the team’s commitment to developing objective standards for evaluating legislation. “The goal here is not for us to say which bills we think are good and which ones are bad,” he explained. “Instead, we want to provide an easily digestible format for people to see what kinds of topics each bill covers and better understand where policymakers are in terms of addressing developments in AI.”

As of now, the team has evaluated approximately 100 bills, with plans to continue adding analyses on a rolling basis. Their ultimate goal is to evaluate enough legislation to identify large-scale trends in AI governance and legislation.

“With the analysis data that AISLE has provided, it is possible to understand which topics come in and out of the spotlight in each year’s legislative session, such as the rise in attention paid to the consequences of AI-generated synthetic content,” Venkatasubramanian noted. “We were also able to analyze similarities between bills to understand how ideas spread and diffuse across different states, and how ‘template’ bills influence how legislators draft legislation.”
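The team's actual similarity method is not described in the announcement, but the general idea of measuring how alike two bill texts are can be illustrated with something as crude as Jaccard overlap of their word sets:

```python
def jaccard(a: str, b: str) -> float:
    """Crude similarity of two bill texts: overlap of their word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Two hypothetical bill fragments sharing "template" boilerplate language.
s = jaccard("an act to regulate automated decision systems",
            "an act to regulate synthetic content")
print(round(s, 2))
```

High pairwise scores between bills from different states are exactly the signal that would reveal "template" legislation spreading, though a production system would use far richer representations than bags of words.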

The CNTR AISLE project is still in its early stages, with plans to introduce new features to the portal in the coming weeks. As legislative sessions for 2026 commence across the country, the team hopes that the portal will prove beneficial to a diverse range of users, including policymakers, journalists, and the general public.

“When we started work on AISLE, we hoped that the system we were building would be useful to policymakers, the press, and the public,” Venkatasubramanian said. “But as our team has grown, and as the work has developed, I’ve come to realize how invaluable AISLE is as an educational experience for the many students in technical and non-technical disciplines interested in AI policy. It has also become clear that AISLE lays the foundation for long-term scholarly research on how efforts to shape this critical and transformative technology are evolving over time.”

Venkatasubramanian has an impressive background, having served as the Assistant Director for Science and Justice in the White House Office of Science and Technology Policy during the Biden-Harris administration, where he co-authored the Blueprint for an AI Bill of Rights. He has also received several accolades for his research, including a CAREER award from the National Science Foundation for his work in the geometry of probability, a test-of-time award at ICDE 2017 for his contributions to privacy, and a KAIS Journal award for his work on auditing black-box models.

As the CNTR AISLE project continues to evolve, it promises to be a vital resource in understanding the legislative landscape surrounding AI technologies in the United States, fostering informed discussions and decisions about the future of AI policy.

According to The American Bazaar, the launch of the AISLE Portal marks a significant advancement in the effort to track and analyze AI legislation nationwide.

Data Breach at Figure Exposes Nearly One Million Accounts

Nearly 1 million accounts were compromised in a data breach at Figure Technology Solutions, exposing sensitive personal information due to a social engineering attack.

In a significant data breach, hackers have exposed personal information from 967,200 accounts at Figure Technology Solutions, a blockchain-focused fintech lender. The compromised data includes names, addresses, email addresses, and dates of birth.

For those who have applied for a loan online, the reality of sharing personal information can be alarming. Your name, email, date of birth, and even your home address may now be circulating on dark web forums. This is the unfortunate situation for nearly 1 million individuals following the breach at Figure Technology Solutions, which was founded in 2018 and utilizes the Provenance blockchain for lending, borrowing, and securities trading.

Figure claims to have unlocked over $22 billion in home equity through partnerships with banks, credit unions, fintechs, and home improvement companies. However, behind the scenes, a different story unfolded as attackers executed a social engineering attack to gain access to sensitive data.

According to breach notification data shared by Have I Been Pwned, the leaked information includes more than 900,000 unique email addresses, along with names, phone numbers, physical addresses, and dates of birth. This trove of personal data presents a significant opportunity for identity thieves.

A spokesperson for Figure Technology Solutions explained that the breach resulted from an employee being socially engineered into providing access. “We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,” the spokesperson stated. “We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate. We are also implementing additional safeguards and training to further strengthen our defenses. We are offering complimentary credit monitoring to all individuals who receive a notice. We continuously monitor accounts and have strong safeguards in place to protect customers’ funds and accounts.”

While blockchain technology is often associated with security and invulnerability, this incident underscores that attackers can exploit human vulnerabilities rather than breaking through cryptographic defenses. Groups like ShinyHunters have been linked to this breach, reportedly claiming responsibility and posting 2.5GB of data tied to thousands of loan applicants on the dark web.

In recent weeks, ShinyHunters has also claimed responsibility for breaches involving other companies, including Canada Goose, Panera Bread, and SoundCloud. Although not every case is connected, security researchers have noted a concerning trend where attackers impersonate IT support, create urgency, and direct employees to fake login portals that closely resemble legitimate ones. Once employees enter their credentials, including multi-factor authentication codes, attackers can gain access to single sign-on systems linked to major platforms like Microsoft and Google. This can lead to a cascade of compromised accounts and internal systems.

The implications of the Figure data breach are significant. If your information was part of the breach, criminals now possess enough detail to craft convincing phishing emails or phone scams. They can reference your real name and address, potentially impersonating a lender or bank regarding your application.

Even if you have never applied for a loan with Figure, this incident highlights a broader issue: no platform is immune to human error. Social engineering works by targeting trust rather than technology. While Figure promotes itself as a blockchain-native company, the reality is that blockchain technology does not protect against well-crafted phone calls or social manipulation.

As financial services increasingly move online, the attack surface for potential breaches expands. Loan applications, identity verification tools, and cloud-based systems offer convenience but also create new vulnerabilities.

To protect yourself following the Figure data breach, it is essential to take proactive steps. While you cannot control how companies secure their systems, you can manage your response. Start by checking whether your email address appears in the exposed dataset. You can do this by visiting Have I Been Pwned and entering your email address to see if your information has been compromised.

Additionally, be cautious of unexpected calls regarding your accounts. If someone pressures you to act immediately, it is advisable to hang up and contact the company directly using a number from its official website.
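The lookup described above can also be scripted. Have I Been Pwned exposes an HTTP API for breached-account lookups; the `breachedaccount` endpoint and `hibp-api-key` header used below reflect the service's public v3 API, which requires a paid API key, and should be verified against its current documentation. A sketch under those assumptions:

```python
import urllib.error
import urllib.parse
import urllib.request

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_url(email: str) -> str:
    """Build the lookup URL for a given address."""
    return API + urllib.parse.quote(email)

def check_breaches(email: str, api_key: str):
    """Query Have I Been Pwned; a 404 response means no known breach."""
    req = urllib.request.Request(
        breach_url(email),
        headers={"hibp-api-key": api_key, "User-Agent": "breach-check-sketch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read()  # JSON list of breaches the address appears in
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # address not found in any known breach
        raise
```

For a one-off check, simply entering your address on the Have I Been Pwned website is equally effective; the scripted form is mainly useful for monitoring several addresses over time.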

The Figure data breach serves as a stark reminder that technology alone cannot safeguard sensitive information. A single employee tricked into revealing credentials can expose hundreds of thousands of individuals. This incident is not a failure of blockchain technology but rather a failure of trust.

If your data was involved in the breach, it is crucial to take action now. Even if it was not, this incident should serve as a wake-up call. Your personal information holds significant value, and criminals are aware of this. Companies must also recognize the importance of investing in employee training and security measures to prevent such breaches in the future.

As we navigate an increasingly digital landscape, the question remains: are companies doing enough to protect sensitive information, or are they relying too heavily on technology alone? This breach raises critical concerns about the adequacy of current security practices and the need for a more comprehensive approach to safeguarding personal data.

For further insights and updates on cybersecurity, visit CyberGuy.

US Supreme Court Declines Review of AI-Generated Art Copyright Case

The U.S. Supreme Court has opted not to address the copyright eligibility of art created by artificial intelligence, leaving lower court decisions intact.

The U.S. Supreme Court declined on Monday to consider whether art generated by artificial intelligence (AI) can be copyrighted under U.S. law. This decision comes in response to a case involving Stephen Thaler, a computer scientist from Missouri, who was denied copyright protection for a piece of visual art created by his AI technology.

Thaler had approached the Supreme Court after lower courts upheld a ruling from the U.S. Copyright Office, which stated that works produced by AI are ineligible for copyright protection due to the absence of a human creator. Thaler, based in St. Charles, Missouri, applied for federal copyright registration in 2018 for his artwork titled “A Recent Entrance to Paradise.” The piece depicts train tracks leading into a portal, surrounded by vibrant green and purple plant imagery.

In 2022, Thaler’s application was rejected on the grounds that copyright law requires a human author for creative works. The Supreme Court’s refusal to hear the case means that this decision remains in effect.

The Trump administration had previously urged the Supreme Court not to take up Thaler’s appeal. The Copyright Office has also denied copyright requests from other artists seeking protection for images generated with the AI platform Midjourney. Unlike Thaler, these artists claimed they deserved copyright for images they created with AI assistance, while Thaler argued that his AI system independently generated “A Recent Entrance to Paradise.”

A federal judge in Washington upheld the Copyright Office’s decision in Thaler’s case in 2023, emphasizing that human authorship is a fundamental requirement for copyright eligibility. This ruling was later affirmed by the U.S. Court of Appeals for the District of Columbia Circuit in 2025.

Thaler’s legal team expressed concern over the implications of the Copyright Office’s stance, stating, “Even if it later overturns the Copyright Office’s test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

The administration reiterated its position, noting that while the Copyright Act does not explicitly define the term “author,” various provisions indicate that it refers to a human rather than a machine.

This is not the first time the Supreme Court has declined to address issues surrounding AI and intellectual property. Thaler previously sought the Court’s intervention in a separate case regarding whether AI-generated inventions could qualify for U.S. patent protection. His patent applications were similarly rejected by the U.S. Patent and Trademark Office on grounds consistent with those applied to his copyright claims.

The Supreme Court’s decision not to engage with the complexities of AI-generated art and its copyright implications leaves significant questions unanswered, particularly as AI technology continues to evolve and permeate various creative fields.

As the debate over AI and intellectual property rights continues, the implications of these rulings may have lasting effects on artists, technologists, and the broader creative industry.

According to The American Bazaar, the Supreme Court’s decision underscores the ongoing challenges faced by creators and innovators in navigating the intersection of technology and copyright law.

Iranian Networks Experience Disruptions Amid Airstrikes, Highlighting Digital Conflict Evolution

A recent cyberattack during airstrikes on Iran underscores the increasing importance of digital warfare in modern conflicts, revealing vulnerabilities in global networks and offering critical cybersecurity lessons.

A significant cyberattack coincided with airstrikes on Iran, illustrating the evolving nature of warfare where digital conflicts play a crucial role. On February 28, 2026, during Operation Roar of the Lion, fighter jets and cruise missiles targeted Iranian Revolutionary Guard command centers. Simultaneously, a parallel cyber offensive reportedly unfolded, resulting in widespread disruptions across the nation.

As missiles rained down, Iran experienced a near-total digital blackout. Key media platforms and official news sites went offline, while government digital services and local applications failed in major cities. According to NetBlocks, a global internet monitoring organization, internet traffic in Iran plummeted to just 4 percent of normal levels, indicating either a state-ordered shutdown or a large-scale cyberattack aimed at crippling critical infrastructure.

Western intelligence sources later suggested that the cyber offensive was designed to disrupt the command and control systems of the Islamic Revolutionary Guard Corps (IRGC) and hinder their ability to coordinate counterattacks. This incident serves as a stark reminder that modern warfare increasingly intertwines airstrikes with digital assaults, creating repercussions that extend far beyond the battlefield.

Reports indicated widespread outages throughout Iran, with major news outlets such as the state-run IRNA going offline. Tasnim, a semi-official news agency aligned with the IRGC, even displayed subversive messages targeting Supreme Leader Ali Khamenei. The IRGC, which plays a pivotal role in Iran’s national security and regional operations, faced significant operational challenges as local apps and government services failed in cities like Tehran, Isfahan, and Shiraz.

This was not merely a case of a single website being defaced; the attack appeared systemic. Electronic warfare reportedly disrupted navigation and communication systems, while distributed denial of service (DDoS) attacks overwhelmed networks with excessive traffic, rendering them inoperable. Deep intrusions targeted critical sectors such as energy and aviation, further exacerbating the crisis. Even Iran’s isolated national internet struggled under the pressure.

For a regime that tightly controls information, losing digital command poses both operational and political risks. Cyber operations can achieve objectives without the immediate loss of life, allowing for disruption without triggering full-scale war—a vital consideration in a region where escalation can occur rapidly. Historically, Iran has demonstrated an understanding of this strategy, having previously targeted U.S. financial institutions and Saudi Aramco in cyberattacks between 2012 and 2014.

Following Israeli strikes in 2025, cyberattacks targeting Israel surged dramatically within days. Cyber retaliation provides leaders with a means to respond while minimizing direct military confrontation, thereby gaining leverage in negotiations without crossing critical thresholds.

However, there is a significant risk involved. Each cyber strike carries the potential for miscalculation, and damage to critical infrastructure can quickly escalate into real-world consequences. If the recent blackout and airstrikes mark a turning point, Tehran has several options, none of which are straightforward. Cyber retaliation remains one of Iran’s most adaptable tools, ranging from disruptive attacks to influence campaigns that pressure critical services.

Experts warn that U.S. cyber defenses and the private sector may face sustained challenges in the wake of these events. Iran has previously utilized drones and electronic interference as signals, with analysts noting the potential for jamming, spoofing, and harassment of unmanned systems to raise costs without directly targeting personnel.

The risks are escalating. An official from an EU naval mission reported that IRGC radio transmissions warned ships against passage through the Strait of Hormuz. Greece has advised vessels to avoid high-risk routes, citing concerns about electronic interference that could disrupt navigation. Insurers are already adjusting their policies, with reports of war-risk coverage being canceled or significantly increased.

Iran has historically collaborated with allied forces and militias in the region, and some of these groups may escalate attacks on U.S. interests or allied partners in retaliation, further widening the conflict without direct state-to-state engagement. While missile strikes remain a high-impact option, they also increase the likelihood of rapid escalation. Recent analyses suggest that Iran may use missile strikes as a signaling tool, particularly if its leadership feels cornered.

The uncomfortable reality is that neither Washington nor Tehran likely desires a full-scale regional war. In such moments, military strikes rarely occur in isolation; they are often accompanied by diplomatic efforts. Leaders send signals, apply pressure, and attempt to leave room for negotiations. However, escalation can gain momentum quickly. Each missile fired alters the equation, and each casualty raises the stakes, making it increasingly difficult to de-escalate.

Fear and pride play significant roles in these dynamics, as domestic audiences demand displays of strength. This pressure can lead to limited strikes spiraling into larger conflicts. The recent events highlight a broader trend: nation-states are increasingly pairing kinetic strikes with digital offensives. Cyberattacks can blind communications, freeze infrastructure, and disrupt financial systems long before the first explosion is registered.

This reality is crucial for businesses and individuals alike. Modern conflicts do not remain confined to battlefields; supply chains, energy grids, and online platforms can all feel the ripple effects. The blackout in Iran serves as a reminder that digital resilience has become a national security issue. When a country’s internet can drop to just 4 percent of normal traffic within hours, it underscores the rapid escalation potential of cyber conflicts. Even disruptions occurring overseas can have far-reaching consequences for interconnected global networks.

While geopolitics may be beyond individual control, personal digital hygiene can be managed. Practical steps to reduce risk during heightened cyber activity include installing strong antivirus software, keeping devices updated, using unique passwords stored in reputable password managers, enabling two-factor authentication, and being cautious with urgent headlines or alerts about international conflicts.
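Of those steps, two-factor authentication is especially cheap insurance, because the second factor is derived locally rather than sent over the network. As a rough illustration of how the common time-based variant works (this is the generic RFC 6238 scheme, not any particular vendor's implementation): the current 30-second time window is combined with a shared secret via HMAC-SHA1 and truncated to a six-digit code.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)      # 8-byte big-endian time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59
# yields 94287082 for 8 digits, so the 6-digit code is "287082".
print(totp(b"12345678901234567890", 59))  # -> "287082"
```

Because both the phone and the service derive the same code independently from the shared secret and the clock, an attacker who has only a leaked password still cannot log in.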

The reported cyber blackout in Iran may signal a new chapter in modern conflict. While jets and missiles remain significant, the importance of servers, satellites, and code cannot be overlooked. Leaders may attempt to contain damage while demonstrating strength, but history shows how quickly plans can unravel under pressure. Today, warfare operates on electricity and bandwidth as much as it does on fuel and ammunition. When networks go dark, the repercussions extend far beyond the battlefield, affecting banking systems, airports, hospitals, and personal devices.

This moment serves as a crucial reminder: if an entire nation’s digital systems can be disrupted in hours, how prepared is your community for a similar event? The implications of these developments are profound and warrant careful consideration.


Google Discontinues Dark Web Monitoring Service: What You Need to Know

Google has discontinued its Dark Web Report feature, which previously scanned for personal information breaches, leaving users to rely on alternative security tools for monitoring their data exposure.

Google has officially discontinued its Dark Web Report feature, a free service that once scanned known dark web breach dumps for personal information associated with users’ Google accounts. This tool provided notifications when email addresses and other identifiers appeared in leaked datasets.

According to Google’s support page, the dark web scanning ceased on January 15, 2026, with the reporting function removed entirely on February 16, 2026. As a result, users can no longer access this feature. The company stated that this decision reflects a shift toward security tools that offer clearer guidance after exposure, rather than standalone scan alerts.

For those who previously relied on the dark web scan as an early warning system for leaked data, this change removes a significant source of information. The Dark Web Report functioned as a basic exposure scanner, checking whether personal information linked to a Google account had surfaced in known breach collections circulating on the dark web.

When a match was found, users received a notification detailing the type of data that appeared in a leak. This could include an email address, phone number, date of birth, or other identifying details commonly harvested during large-scale hacks. However, the report did not display stolen credentials or provide access to the leaked database itself, nor did it trace the origin of the compromise beyond referencing the breached service when available.

After receiving an alert, users were responsible for taking the next steps. Google recommended actions such as changing passwords, enabling stronger authentication methods, and reviewing account security settings. With the removal of the tool, the automated breach check tied directly to a Google account is no longer available.

Google now directs users to its Security Checkup, a dashboard that scans accounts for weak settings and unusual sign-in activity. Additionally, its built-in Password Manager includes a Password Checkup feature that scans saved credentials against known breach databases and prompts users to change exposed passwords. Google also supports passkeys and two-factor verification to enhance account security.
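Tools that check credentials against breach databases generally avoid transmitting the raw password. One widely used pattern — this sketch assumes the k-anonymity "range" scheme popularized by the Have I Been Pwned Pwned Passwords API, not Google's internal protocol — hashes the password locally, sends only the first five hex characters of the SHA-1 digest, and matches the returned suffixes on the client, so the server never learns which password was queried:

```python
import hashlib

def split_digest(password: str) -> tuple[str, str]:
    """SHA-1 the password locally; split into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, range_response: str) -> bool:
    """Match the local suffix against 'SUFFIX:COUNT' lines returned for our prefix.

    Only the 5-character prefix ever leaves the machine, so the server cannot
    tell which of the many hashes sharing that prefix was being checked.
    """
    _, suffix = split_digest(password)
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False

# Hypothetical server response for our prefix (illustrative only, not real data).
prefix, suffix = split_digest("password123")
mock_response = f"{suffix}:12345\n" + "A" * 35 + ":2"
print(is_breached("password123", mock_response))  # -> True
```

In a real client the `mock_response` would come from an HTTPS request for the five-character prefix; everything else happens locally.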

The Results About You tool allows users to search for personal information in Google Search and submit removal requests for certain publicly indexed details. However, once personal information is compromised, it often ends up far beyond the initial breach. Stolen credentials and identity data are regularly trafficked on underground platforms where buyers can search for information tied to real individuals.

The BidenCash dark web marketplace was taken down by U.S. authorities in June 2025, with the Justice Department confirming that the platform sold stolen personal information and credit card data. These illicit markets operate with a level of organization comparable to legitimate online stores, offering search tools and bulk data sets that can be used to target online accounts. This makes credential stuffing easier, as attackers test leaked passwords across multiple services to gain unauthorized access.

A breach alert tied to a dark web scan indicates a leak at a specific moment in time; it does not track whether that information has been sold to third parties or used in subsequent fraud attempts. For everyday users, this means that simply knowing their data appeared in a leak does not provide much actionable insight.

With Google’s dark web scan now discontinued, some individuals may consider dedicated identity protection services. Many of these services offer continuous monitoring of personally identifiable information and send alerts about changes to credit reports from all three major U.S. credit bureaus. This can include notifications about new inquiries, newly opened accounts, and monthly credit score updates.

Beyond credit monitoring, certain services track linked bank, credit card, and investment accounts for unusual activity. They may also monitor public records for changes to addresses or property titles and alert users if their information appears in those filings. Many providers include identity theft insurance to help cover eligible out-of-pocket recovery costs, with coverage limits varying by plan and provider.

While no service can prevent every form of identity theft, ongoing monitoring and recovery support can facilitate a quicker response if personal information is misused. Google’s decision to drop its Dark Web Report may seem minor, but it eliminates a tool that many users relied on for early warnings about data breaches. Although Google continues to offer Security Checkup, Password Checkup, passkeys, and two-step verification, none of these scans the dark web for personal identifiers the way the Dark Web Report did; Password Checkup, for example, covers only saved passwords, not details such as phone numbers or dates of birth.

Stolen data does not simply vanish; criminals copy, sell, and reuse it. An alert may indicate a single moment of exposure, but ongoing identity theft monitoring is essential for maintaining awareness over time. With the removal of Google’s dark web monitoring feature, users must now decide whether to actively check their data exposure or assume that someone else is monitoring it for them.

For more insights on identity protection and security, visit CyberGuy.com.

Ex-Twitter CEO’s Firm Block Plans to Cut Workforce by Nearly 50% with AI

Jack Dorsey’s company Block plans to lay off 4,000 employees, nearly half of its workforce, citing increased productivity from artificial intelligence tools.

Block, the financial technology company founded by former Twitter CEO Jack Dorsey, has announced plans to lay off 4,000 of its 10,000 employees. This decision is attributed to advancements in artificial intelligence (AI) that have significantly enhanced productivity within the company.

In a letter to shareholders on Thursday, Dorsey emphasized the transformative impact of AI on business operations. “Intelligence tools have changed what it means to build and run a company,” he stated. “We’re already seeing it internally. A significantly smaller team, using the tools we’re building, can do more and do it better. And intelligence tool capabilities are compounding faster every week.”

Despite the substantial layoffs, Dorsey assured stakeholders that the decision was not a reflection of financial instability. He pointed out that Block had performed well, exceeding Wall Street expectations with a reported total revenue of $6.25 billion for the fourth quarter. In a post on X, he explained that he faced two options: to gradually reduce the workforce over an extended period or to act decisively in the present.

“Repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead,” Dorsey wrote.

During the earnings call, executives noted that Block had been increasingly integrating AI into its operations for several years. They indicated that some AI initiatives were nearing full implementation, while others were still in earlier stages of development. This announcement follows a previous round of layoffs earlier in February, which had already seen hundreds of workers let go.

The decision to reduce the workforce by nearly half has drawn comparisons to the drastic measures taken by Elon Musk when he acquired Twitter (now X) in November 2022, where he cut approximately 50% of the staff in a single move. Dorsey, a co-founder of Twitter, has had a complex relationship with Musk, initially supporting his acquisition but later suggesting that Musk “should have walked away.”

In addition to his role at Block, Dorsey has been involved in the development of Bluesky, a decentralized alternative to Twitter, and has expressed strong support for Bitcoin.

The layoffs at Block have reignited discussions about the broader implications of AI on employment. Tech leaders, including Anthropic CEO Dario Amodei and Meta CEO Mark Zuckerberg, have raised concerns about the potential negative effects of AI on the workforce. A recent report from the research firm Citrini, released on February 22, outlined a scenario where the growth of AI could adversely affect the overall economy.

Conversely, some industry figures have cautioned against hastily attributing layoffs to AI. OpenAI CEO Sam Altman has pointed out that some companies may be “AI washing,” or misleadingly linking unrelated layoffs to advancements in AI technology.

Critics on X have challenged Dorsey’s narrative regarding the layoffs at Block. One user highlighted that the company’s workforce had more than tripled from 3,900 to 12,500 employees between December 2019 and December 2022, during the tech boom fueled by the pandemic. “Unwinding less than half an insane COVID overhiring binge has much more to do with Jack Dorsey’s managerial incompetence than whether AI is going to take your job,” the post read.

Another commenter suggested that Block had created “two parallel company structures during COVID” and was now consolidating them, framing the layoffs as a management correction rather than a revolutionary shift driven by AI. This user predicted that more companies might use “AI restructuring” as a pretext for decisions that were already in the works.

The developments at Block reflect ongoing tensions in the tech industry regarding the role of AI in shaping the future of work and the management strategies employed by companies navigating these changes. As the conversation continues, the implications for employees and the economy remain a focal point of concern.

According to The American Bazaar, the situation at Block serves as a critical case study in the evolving landscape of technology and employment.

Amazon Discontinues Development of Blue Jay Warehouse Robot

Amazon has discontinued its Blue Jay warehouse robot program, raising questions about the scalability of advanced robotics in logistics.

Amazon has quietly ended its Blue Jay warehouse robot program, which aimed to enhance same-day delivery capabilities, just months after its initial unveiling. The multi-armed, ceiling-mounted robot was introduced in October as a significant advancement in warehouse automation.

Despite the initial excitement surrounding Blue Jay, the program faced considerable challenges that ultimately led to its discontinuation. While the core technology behind Blue Jay will be integrated into other projects, the robot itself will no longer be developed.

This abrupt decision prompts a critical inquiry: If Amazon, one of the world’s leading logistics companies, cannot successfully implement a high-profile robot at scale, what implications does this have for the future of artificial intelligence (AI) in practical applications?

Blue Jay was not merely an upgrade to existing conveyor belt systems; it was designed to recognize and sort multiple packages simultaneously using advanced AI-powered perception models. Amazon claimed that the system was developed in under a year, a remarkable feat aimed at increasing package throughput while alleviating worker strain in fulfillment centers.

However, despite its promising design, Blue Jay encountered significant engineering and cost hurdles. The robot’s ceiling-mounted configuration required intricate installation and seamless integration into Amazon’s Local Vending Machine warehouses, which are designed as expansive, automated structures. This rigidity in design likely became a liability, as modifications would necessitate extensive reconfiguration of hardware and infrastructure, a process that is both time-consuming and costly.

As a result, several employees who were involved in the Blue Jay project have transitioned to other robotics initiatives within the company. Although the Blue Jay robot itself has been shelved, Amazon continues to explore new avenues for improving its warehouse systems, with the underlying technology informing future designs.

Looking ahead, Amazon is shifting its focus to a new warehouse architecture known as Orbital. Unlike the older Local Vending Machine model, Orbital is modular, allowing for quicker deployment in various layouts. This adaptability is crucial as retail landscapes evolve, with customers increasingly expecting same-day delivery from urban centers, local stores, and grocery outlets.

Orbital could enable Amazon to establish micro-fulfillment centers in proximity to retail locations, including Whole Foods, thereby enhancing its competitive edge against rivals like Walmart, which already boasts a robust grocery network.

In conjunction with Orbital, Amazon is also developing a new robotics system called Flex Cell. Unlike Blue Jay’s ceiling-mounted design, Flex Cell will operate on the floor, indicating a strategic shift towards smaller, more flexible automation solutions tailored to the unpredictable nature of local retail environments.

For regular Amazon customers, the immediate impact of these changes may be minimal, as same-day and next-day delivery options remain a priority. However, the long-term implications of Amazon’s evolving robotics strategy could significantly influence order fulfillment speed, pricing, and the operational dynamics of local warehouses.

If Orbital proves successful, it could facilitate faster and more efficient deliveries. Conversely, if it encounters difficulties, the expansion of same-day delivery services could slow down or become more costly. This scenario underscores a broader truth about AI: while software can adapt rapidly through code updates, physical robots face challenges that require substantial investment and time to overcome.

The discontinuation of Blue Jay highlights a growing divide in the tech industry. While software-based AI is advancing at a remarkable pace, hardware development remains fraught with complexities. Robots must navigate real-world challenges such as gravity, friction, and unpredictable human interactions, where each error carries tangible costs.

Amazon’s decision to shelve Blue Jay does not signify a retreat from robotics; rather, it represents a recalibration of its approach. The company is betting on the success of modular, flexible systems over large, integrated machines. This strategic pivot could shape the future of e-commerce logistics.

Ultimately, the promise of faster delivery, improved availability, and enhanced local convenience remains intact for consumers. However, the journey to realize these ambitions involves navigating the intricate balance between AI aspirations and the constraints of physical reality.

As Amazon grapples with the challenges of implementing advanced robotics at scale, it raises an important question: How much of the AI revolution is still more vision than reality? This ongoing dialogue will shape the future of technology and logistics in the years to come, according to CyberGuy.

Researchers Create E-Tattoo to Monitor Mental Workload in High-Stress Jobs

Researchers have developed a face-mounted electronic tattoo, or “e-tattoo,” designed to monitor mental workload in high-stress professions using advanced brainwave technology.

In an innovative study published in the journal Device, scientists have introduced a groundbreaking electronic tattoo device, referred to as an “e-tattoo,” that can help individuals in high-pressure work environments monitor their brain activity and cognitive performance.

The research team, led by Dr. Nanshu Lu from the University of Texas at Austin, emphasizes that mental workload is a crucial element in human-in-the-loop systems, significantly affecting cognitive performance and decision-making processes. This device aims to provide a more cost-effective and user-friendly method for tracking mental workload, particularly in demanding fields such as aviation, healthcare, and emergency response.

Dr. Lu noted that the e-tattoo could be particularly beneficial for professionals like pilots, air traffic controllers, doctors, and emergency dispatchers, who often operate under intense stress. Additionally, the technology could enhance training and performance for emergency room doctors and operators of robots and drones.

The primary objective of the study was to develop a means of measuring cognitive fatigue among individuals in high-stakes careers. The e-tattoo is designed to be temporarily affixed to the forehead and is significantly smaller than existing monitoring devices.

Utilizing electroencephalogram (EEG) and electrooculogram (EOG) technologies, the e-tattoo measures both brain waves and eye movements. Traditional EEG and EOG equipment tends to be bulky and expensive, but the e-tattoo presents a compact and affordable alternative.

Dr. Lu explained that the device is designed to be as thin and flexible as a temporary tattoo sticker, allowing for comfortable wear while providing accurate readings. She stated, “Human mental workload is a crucial factor in the fields of human-machine interaction and ergonomics due to its direct impact on human cognitive performance.”

The study involved six participants who were tasked with identifying letters displayed on a screen. Each letter appeared sequentially at various locations, and participants were instructed to click a mouse when they recognized either the letter or its position from a previously shown set. The difficulty of the tasks increased progressively, and the researchers observed shifts in brainwave activity that indicated a heightened mental workload as challenges intensified.
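The paper's exact signal-processing pipeline is not described here, but a common workload proxy in the EEG literature is the ratio of frontal theta (4–8 Hz) to alpha (8–12 Hz) band power, which tends to rise as task demands increase. A rough pure-Python sketch of that kind of analysis (the band choices, synthetic signal, and naive DFT are illustrative assumptions, not the study's method):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power in [f_lo, f_hi) Hz via a naive DFT (O(n^2), fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 128                              # sampling rate in Hz
t = [i / fs for i in range(fs)]       # one second of samples
# Synthetic "high workload" trace: strong 6 Hz theta, weak 10 Hz alpha.
eeg = [1.0 * math.sin(2 * math.pi * 6 * x) + 0.2 * math.sin(2 * math.pi * 10 * x) for x in t]

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 12)
print(theta > alpha)  # -> True: the theta/alpha ratio flags the dominant slow-wave activity
```

A real device would apply this over sliding windows of filtered forehead EEG and feed the band-power features to a classifier, but the core idea — tracking how energy shifts between frequency bands as tasks get harder — is the same.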

The e-tattoo consists of a battery pack, reusable chips, and a disposable sensor, making it a practical solution for real-time monitoring. Currently, the device is a lab prototype, with a production cost of approximately $200.

Dr. Lu highlighted that further development is necessary before the e-tattoo can be commercialized. This includes enhancing the device’s ability to decode mental workload in real-time and validating its effectiveness with a larger group of participants in more realistic settings.

As the demand for effective stress management tools in high-pressure jobs continues to grow, the e-tattoo represents a promising advancement in cognitive performance monitoring, potentially transforming how professionals manage their mental workload.

According to Fox News, the e-tattoo could pave the way for improved performance and training in various high-stakes occupations.

Sheel Dodani Receives $100,000 Hackerman Award for Protein Research

Indian American scientist Sheel Dodani has been awarded the prestigious $100,000 Hackerman Award for her innovative research in protein technology aimed at enhancing human health and environmental sustainability.

Sheel Dodani, an Indian American scientist, has received the esteemed 2026 Norman Hackerman Award in Chemical Research from The Welch Foundation. This award, which includes a $100,000 prize and a bronze sculpture, recognizes her groundbreaking work in the field of engineered proteins, specifically their application as anion sensors in biological systems.

Dr. Dodani is an associate professor of chemistry and biochemistry at the University of Texas at Dallas. Her research has been described as “using creative and daring chemistry to engineer technologies” that significantly contribute to human health and environmental improvement. Fred Brazelton, chair and director of The Welch Foundation, praised her achievements, stating, “Dr. Dodani is using creative and daring chemistry to engineer technologies that can measure and manipulate anions in living systems for the betterment of human health and the environment.”

The Hackerman Award is named after the foundation’s former scientific advisory board chair and aims to honor the accomplishments of early-career chemical scientists in Texas who are committed to advancing the fundamental understanding of chemistry. The award not only highlights individual achievement but also underscores the importance of innovative research in the scientific community.

Dr. David Hyndman, dean of the School of Natural Sciences and Mathematics at UT Dallas, remarked on the significance of Dodani’s work, stating, “Sheel Dodani’s research is opening an important new window into the chemistry of life.”

Dodani’s research group has developed the first coherent suite of genetically engineered fluorescent proteins that serve as biosensors for inorganic anions. While much attention has been given to cations—positively charged particles that are crucial for biological processes—anions, or negatively charged particles, have not been as thoroughly explored. This gap in understanding is particularly notable given the vital role that anions play in various biological functions.

One prominent example of an anion is chloride, which is essential for regulating fluid balance, blood pressure, and pH levels in the human body. The biosensors developed by Dodani have revolutionized researchers’ ability to track and visualize the behavior and interactions of these biologically significant anions in real time within living systems.

By utilizing fluorescent biosensors, researchers can now observe how anions behave in cells, paving the way for new therapeutic avenues. This includes the potential identification of small molecules that could treat chloride channel dysfunctions associated with diseases such as cystic fibrosis.

Reflecting on her research journey, Dodani noted, “This work began with a fundamental question: How can we bind an anion in water?” She explained that her team turned to nature’s supramolecular machines—proteins—to find answers. Through protein engineering, they have unlocked new functionalities in fluorescent proteins that enable the observation of anion biology, which has traditionally been challenging to study directly in living cells.

Dodani expressed gratitude for the support from The Welch Foundation, stating, “The Welch Foundation gave us the opportunity to pursue this direction early on. At the time, there was no established framework for investigating anions in water, let alone in living systems. By integrating concepts from different disciplines, we have started to answer questions that were previously out of reach.”

The Welch Foundation plays a crucial role in providing resources that allow researchers like Dodani to take risks in their scientific inquiries. This support is vital for those who aim to tackle complex questions that could have significant implications for human health and the environment.

Born and raised in Plano, Texas, Dodani completed her Bachelor of Science in chemistry at UT Dallas. She then pursued her PhD at the University of California, Berkeley, followed by a postdoctoral fellowship at the California Institute of Technology. In 2016, she returned to UT Dallas as a faculty member in the School of Natural Sciences and Mathematics, where she continues to make impactful contributions to the field of chemistry.

According to The American Bazaar, Dodani’s innovative research not only enhances our understanding of anions but also holds promise for future advancements in medical and environmental applications.

Arvind KC Appointed to Lead Global Expansion Efforts at OpenAI

OpenAI has appointed Arvind KC, a former Google executive, as Chief People Officer to enhance talent acquisition and workplace culture amid the company’s rapid expansion.

OpenAI has announced the appointment of Arvind KC as its new Chief People Officer, marking a significant addition to the leadership team of one of the world’s most scrutinized artificial intelligence companies.

KC, who previously held executive roles at Google and Roblox, will oversee human resources and internal scaling efforts at OpenAI during a period of rapid growth in both headcount and global influence.

With a strong foundation in both technical and managerial disciplines, KC brings a unique perspective to the role. He earned a bachelor’s degree in chemical engineering from the University Institute of Chemical Technology (UICT) in Mumbai, India, a prestigious institution known for its rigorous engineering programs.

Following his education in India, KC moved to the United States to pursue an MBA with a focus on operations management from Santa Clara University. This combination of technical knowledge and strategic management has positioned him well for leadership roles in high-growth technology environments.

Throughout his career, KC has navigated the complexities of rapidly scaling organizations. Most recently, he served as Chief People and Systems Officer at Roblox, where he aligned workforce strategy with internal technical systems to support the company’s growth.

Before his tenure at Roblox, KC was a Vice President at Google, where he led global engineering teams. His experience in engineering-heavy roles at companies like Palantir and Facebook (now Meta) allows him to effectively communicate with the researchers and developers he will now manage.

In his new position at OpenAI, KC is tasked with putting a human face on the company’s rapid expansion, which is often viewed primarily through the lens of its algorithms. His responsibilities will include overseeing global talent acquisition, employee development, and fostering a workplace culture that can withstand the scrutiny faced by the AI sector.

“Arvind’s experience leading global teams at some of the world’s most innovative companies will be invaluable as we continue to grow,” OpenAI stated, highlighting his proven track record in managing large-scale organizational transitions.

This appointment signals a maturation phase for the San Francisco-based firm as it transitions from a small research lab to a global commercial powerhouse. The emphasis on the “human” element of operations reflects a strategic priority for OpenAI as it seeks to attract and retain top talent in a competitive labor market.

KC is expected to bridge the gap between ambitious technical objectives and the everyday needs of a world-class workforce, ensuring that OpenAI remains an attractive destination for elite professionals.

According to The American Bazaar, this leadership change underscores OpenAI’s commitment to developing a robust organizational culture as it continues to expand its reach in the AI industry.

Apple Warns Users of Scam Emails Targeting App Passwords

A phishing scam impersonating Apple claims a fraudulent $2,990 PayPal charge and urges recipients to call a fake support number, prompting cybersecurity experts to issue warnings.

A new phishing scam targeting Apple users has emerged, featuring a deceptive email claiming that an app-specific password was generated for the recipient’s account. The email falsely states that the user authorized a $2,990.02 charge through PayPal and includes a confirmation number, urging the recipient to call a support number immediately. However, this message is a classic example of a phishing scam.

The email is designed to instill panic and urgency in recipients. It appears to be professionally crafted, using Apple branding and mentioning Apple Support. However, upon closer inspection, several red flags indicate that the message is not legitimate.

One of the most significant warning signs is the “To” field, which displays an email address that does not match the recipient’s actual Apple ID. Legitimate emails from Apple are sent directly to the email address associated with the user’s Apple ID. If the visible recipient address differs from yours, it is likely a mass-mailed or spoofed message, a common tactic used by scammers.

Scammers often use large sums of money, like the nearly $3,000 charge mentioned in this email, to provoke fear and prompt quick action from recipients. The goal is to create a sense of urgency that leads individuals to act without thinking critically about the situation.

The email also instructs recipients to call a specific phone number, which does not belong to Apple. Authentic Apple security communications typically direct users to log into their accounts directly rather than pressuring them to call an unfamiliar support line. If a recipient calls this number, they may be connected to a scammer who could extract personal information or financial details.

Additionally, the email contains links that appear to lead to official Apple resources, such as “Apple Account” and “Apple Support.” However, these links may be disguised, leading to malicious websites instead. It is crucial to avoid clicking on links in suspicious emails and instead navigate to official websites by typing the URL directly into a browser.

Another red flag is the mismatch between the email’s subject and its content. While the subject mentions an app-specific password, the body of the email suddenly shifts to discussing a PayPal transaction. This inconsistency is a common tactic used by scammers to heighten urgency and confusion.

The email begins with a generic greeting, “Dear Customer,” rather than addressing the recipient by name. This impersonal approach is typical of bulk phishing emails, which often lack the personalization found in legitimate communications from trusted companies.

Moreover, the email’s Reply-To field may show an address that appears to be from Apple, such as appleid-usen@email.apple.com. However, scammers can easily spoof sender information, making it look like the message is coming from a trusted source. Users should be cautious and evaluate all red flags collectively rather than relying solely on the sender’s address.

The language used in the email is also a telltale sign of a scam. Phrases like “You authorized a USD 2,990.02 payment to apple.com using PayPal” sound awkward and unnatural. Genuine Apple receipts typically reference specific products or subscriptions rather than vague payment notifications tied to password alerts.

Furthermore, the email may display a masked address or an unusual domain, such as relay.quickinvoicesus.com, which does not conform to standard Apple formatting. Legitimate Apple communications will reference the user’s Apple ID directly, not an unrelated invoice-style domain.

Scammers often create a sense of urgency by urging recipients to call immediately to report an unauthorized transaction. This tactic is a hallmark of phishing schemes, as legitimate companies encourage users to log in securely to their accounts rather than rushing them into calling a third-party number.

Once on the phone with a scammer, victims may be led to provide sensitive information or even financial details, resulting in losses that far exceed the fake $2,990 charge mentioned in the email.

If you receive an email of this nature, it is essential to take a moment to pause and assess the situation. Instead of clicking on links or calling numbers provided in the email, verify the details by visiting the official Apple and PayPal websites directly. If you did not generate an app-specific password and see no suspicious charges, you are likely safe.

To protect yourself from phishing scams, consider implementing a few smart habits. Enable two-factor authentication (2FA) on your Apple ID, PayPal, and email accounts. This additional layer of security can prevent unauthorized access even if someone guesses your password.

Always be cautious when an email urges you to call support or click on links. Instead, navigate directly to official websites by typing the addresses into your browser. Ensure that you have strong antivirus software installed on your devices, as it can help detect malicious links and block phishing sites.

Regularly update your software to fix vulnerabilities that attackers may exploit. Outdated software can make it easier for phishing and malware attacks to succeed. Additionally, avoid reusing passwords across different accounts, as this practice can put your entire digital life at risk if one account is compromised.

If you suspect that your email has been exposed in a data breach, consider using a password manager that includes a breach scanner to check for compromised credentials. Reducing the amount of personal information available online can also help decrease your risk of falling victim to phishing scams.

Lastly, report any suspicious emails to Apple at reportphishing@apple.com and mark them as phishing through your email provider. This action helps improve filters and protects others from becoming victims.

In the face of increasingly sophisticated phishing scams, it is vital to remain vigilant and informed. If you receive an email claiming to be from Apple regarding an app-specific password and a large PayPal charge, trust your instincts—it’s likely a scam. Always verify through official channels to protect your personal and financial information.

According to a PayPal spokesperson, “PayPal does not tolerate fraudulent activity, and we work hard to protect our customers from evolving phishing scams. We always encourage consumers to practice vigilance online and to learn how to spot the warning signs of common fraud.”

Astronauts Return to Earth After ISS Mission Rescues Stranded Crew

A NASA crew successfully splashed down in the Pacific Ocean after completing a mission to the International Space Station, marking the agency’s first Pacific landing in 50 years.

NASA astronauts Anne McClain and Nichole Ayers, along with international crew members Takuya Onishi from Japan and Kirill Peskov from Russia, returned to Earth on Saturday, splashing down in the Pacific Ocean off the coast of Southern California. The landing occurred at 11:33 a.m. ET in a SpaceX capsule, marking a significant milestone as it was NASA’s first Pacific splashdown in five decades.

The crew’s mission involved relieving two astronauts, Suni Williams and Butch Wilmore, who had been stranded aboard the International Space Station (ISS) for nine months. Their extended stay was due to issues with the Boeing Starliner capsule, which had experienced thruster problems and helium leaks. NASA ultimately deemed it too risky to return Williams and Wilmore in the Starliner, which flew back to Earth without a crew. Instead, the two astronauts returned home in a SpaceX capsule after their replacements arrived.

Wilmore announced his retirement from NASA earlier this week after a distinguished 25-year career. Reflecting on their mission, McClain expressed hopes that it would serve as a reminder of the power of collaboration and exploration, especially during challenging times on Earth. She shared her anticipation of enjoying some downtime upon her return, while her crewmates looked forward to indulging in hot showers and burgers.

This mission also marked a change for SpaceX, which opted to switch its splashdown locations from Florida to California to minimize the risk of debris falling on populated areas. After exiting the spacecraft, the crew underwent medical checks before being transported by helicopter to meet a NASA aircraft bound for Houston.

Steve Stich, manager of NASA’s Commercial Crew Program, expressed satisfaction with the mission’s outcome during a post-splashdown press conference. “Overall, the mission went great, glad to have the crew back,” he stated. “SpaceX did a great job of recovering the crew again on the West Coast.”

Dina Contella, deputy manager for NASA’s International Space Station program, echoed this sentiment, noting her happiness at seeing the Crew 10 team back on Earth. She remarked that the crew had orbited the Earth 2,368 times and traveled more than 63 million miles during their 146 days in space.

This successful mission underscores the ongoing collaboration between NASA and commercial partners like SpaceX, as they work together to advance human space exploration.

According to Fox News, the mission’s success highlights the resilience and adaptability of space travel in the modern era.

11 Indian-American Innovators Recognized in Forbes’ 250 Greatest Innovators

Forbes has recognized 11 Indian Americans in its “250 America’s Greatest Innovators” list, highlighting their significant contributions to technology and medicine as the nation celebrates its 250th anniversary.

Forbes recently unveiled its “250 America’s Greatest Innovators” list to commemorate the United States’ 250th anniversary, showcasing a diverse group of visionary founders and executives who are reshaping global technology and medicine. Among the honorees are 11 Indian Americans, whose groundbreaking work spans from the early days of the internet to the cutting-edge developments in generative AI.

Leading this distinguished group is Vinod Khosla, co-founder of Sun Microsystems and a prominent venture capitalist, who secured the No. 10 spot. Khosla is renowned for his “black swan” investing style, with early investments in OpenAI and green technology solidifying his reputation as a leading risk-taker in the industry.

Close behind Khosla are tech giants Satya Nadella and Sundar Pichai, who have been instrumental in “re-founding” Microsoft and Alphabet, respectively. Their leadership has pivoted these legacy companies toward an AI-first future, reflecting the transformative power of innovation in the tech landscape.

The Forbes list emphasizes that innovation is often a marathon rather than a sprint. Suma Krishnan, who ranks No. 127, has made significant strides in treating “butterfly skin” disease. She co-founded Krystal Biotech in her 50s to develop the first topical gene therapy, marking a pivotal moment in medical innovation.

Similarly, Jay Chaudhry, ranked No. 128, has been recognized for his pioneering work in “zero trust” cloud security at Zscaler, which has disrupted the traditional firewall industry and redefined security protocols in the digital age.

The Indian American diaspora continues to make substantial contributions to technical infrastructure. Neha Narkhede, co-founder of Confluent and now CEO of Oscilar, is celebrated at No. 155 for her work in real-time data streaming. At MIT, Sangeeta Bhatia, ranked No. 161, has been honored for her innovative approach to merging microchips with biology, revolutionizing drug testing methodologies.

The diversity of this group extends into the daily lives of millions. Aman Narang, who ranks No. 177, has transformed the restaurant industry with Toast’s management platform. Baiju Bhatt, at No. 183, has democratized retail investing through Robinhood and is now pivoting to space-based solar power with Aetherflux. Naval Ravikant, ranked No. 230, has broadened access to startup funding via AngelList, further contributing to the entrepreneurial ecosystem.

The final names on the list reflect a commitment to human equity and efficiency. Shiv Rao, ranked No. 235, has been recognized for his AI medical scribe, Abridge, which automates clinical documentation to alleviate physician burnout. Shan Sinha, at No. 202, has made significant contributions to data management and healthcare safety, while Shivani Siroya, ranked No. 238, has been lauded for her work with Tala, which utilizes mobile data to provide credit to the “unbanked” in emerging markets.

This impressive collection of 11 innovators underscores a robust pipeline of talent that has become essential to the American economy. Whether they began their journeys in a garage or now lead major conglomerates, these individuals have successfully transformed complex scientific and digital theories into everyday realities.

According to Forbes, the achievements of these innovators highlight the critical role that diverse perspectives play in driving progress and shaping the future.

Four Indian-American Researchers Selected as 2026 Sloan Research Fellows

Four Indian American researchers have been awarded the 2026 Sloan Research Fellowships, recognizing their contributions to science and innovation in their respective fields.

Four Indian American researchers have been named among the 126 recipients of the prestigious 2026 Sloan Research Fellowships. Aayush Jain, Arun Kumar Kuchibhotla, and Aditi Raghunathan from Carnegie Mellon University, along with Anand Natarajan from the Massachusetts Institute of Technology (MIT), have been honored for their exceptional research accomplishments.

The Sloan Research Fellowships, awarded annually by the Alfred P. Sloan Foundation, celebrate early-career researchers who demonstrate creativity and innovation in their fields. Each fellowship includes a two-year grant of $75,000, which can be utilized flexibly to support the fellow’s research initiatives.

Stacie Bloom, president and CEO of the Alfred P. Sloan Foundation, remarked, “The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines. We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all.”

Aayush Jain serves as an assistant professor in the Computer Science Department at Carnegie Mellon University. His research focuses on theoretical and applied cryptography, particularly the mathematical foundations that ensure the security of modern cryptographic systems. Jain aims to identify new sources of computational hardness and strengthen the long-term security of encrypted computation, addressing critical gaps in post-quantum cryptography. Additionally, he is dedicated to training graduate students in foundational cryptographic theory.

Arun Kumar Kuchibhotla, an associate professor in the Department of Statistics and Data Science at Carnegie Mellon, tackles foundational challenges in statistical inference and predictive learning. His work has significant applications in machine learning and artificial intelligence, where he develops robust, “assumption-lean” frameworks for uncertainty quantification. Kuchibhotla’s research also contributes to financial time series forecasting and causal inference significance testing. He has pioneered “honest inference” procedures, such as the Hull-based Confidence Method (HulC), which maintain validity in high-dimensional and irregular settings where traditional methods often falter.

Aditi Raghunathan, also an assistant professor in the Computer Science Department at Carnegie Mellon, focuses on understanding the vulnerabilities of AI systems and developing models that are safe, accurate, and reliable in real-world applications. She leads the AI Reliability Lab, which is dedicated to creating trustworthy AI through rigorous analysis and principled methodologies. Raghunathan’s research has garnered recognition at prestigious conferences and plays a crucial role in promoting responsible AI system design and deployment.

Anand Natarajan, an associate professor in Electrical Engineering and Computer Science at MIT, is a principal investigator at the Computer Science and Artificial Intelligence Lab and the MIT-IBM Watson AI Lab. His research primarily revolves around quantum complexity theory, exploring the power of interactive proofs and arguments within a quantum framework. Natarajan’s work aims to evaluate the complexity of computational problems in quantum settings, assessing both the capabilities and the reliability of quantum computers. He holds a PhD in physics from MIT, along with an MS in computer science and a BS in physics from Stanford University. Before joining MIT in 2020, he was a postdoctoral researcher at the Institute for Quantum Information and Matter at Caltech.

The recognition of these four researchers underscores the significant contributions of Indian Americans in advancing scientific knowledge and innovation. Their work not only enhances their respective fields but also sets a foundation for future breakthroughs in technology and research.

According to The American Bazaar, the Sloan Research Fellowships continue to highlight the importance of supporting early-career scientists who are poised to make substantial impacts in their disciplines.

Indian-American Billionaire Vinod Khosla Criticizes Ro Khanna, Bernie Sanders on AI

Indian American billionaire Vinod Khosla criticized U.S. lawmakers Ro Khanna and Bernie Sanders for their warnings about artificial intelligence in a recent post on social media platform X.

Indian American billionaire Vinod Khosla has publicly expressed his discontent with U.S. lawmakers Ro Khanna and Bernie Sanders. In a recent post on X, Khosla launched a scathing critique of their warnings regarding the potential negative consequences of artificial intelligence (AI).

In his post, Khosla stated, “Bernie Sanders, Ro Khanna warn of AI’s potential negative consequences. Morons like Ro Khanna and Bernie Sanders will stop all the good AI can do to protect their religion. Good intentions but bad outcomes is ok for these socialists/commie.”

Vinod Khosla is a well-known Indian-American entrepreneur, venture capitalist, and technology investor. Born in 1955 in India, Khosla began his academic journey as an electrical engineer at the Indian Institute of Technology (IIT) Delhi, later earning a Master’s degree in Biomedical Engineering from Carnegie Mellon University. His career took off at Sun Microsystems, where he was part of the founding team that contributed to the company’s early success.

Khosla gained significant recognition as a general partner at Kleiner Perkins Caufield & Byers, one of Silicon Valley’s most influential venture capital firms, focusing primarily on technology investments. In 2004, he established Khosla Ventures, which invests in clean technology, biotechnology, and disruptive startups. Known for his bold investment strategies and advocacy for technological innovation, Khosla has played a pivotal role in shaping the investment landscape of Silicon Valley, often taking high-risk bets that challenge conventional approaches.

The recent exchange between Khosla and the lawmakers followed a town hall meeting at Stanford University on February 20, 2026. During this event, Sanders articulated concerns that artificial intelligence is advancing at a pace that existing economic and political systems cannot adequately manage. He further questioned Silicon Valley’s assertions that AI will inherently deliver broad public benefits, recalling similar claims made during previous technological advancements that ultimately resulted in increased wealth and power concentration.

This clash between Khosla and U.S. lawmakers underscores a broader tension at the intersection of technology, policy, and societal oversight. It reflects the ongoing debate about how rapidly emerging technologies, particularly artificial intelligence, should be guided, regulated, and integrated into public life. Advocates like Khosla emphasize the transformative potential of AI in addressing complex global challenges, from healthcare innovations to energy efficiency. They argue that excessive regulation could stifle progress and limit the benefits that AI could provide.

On the other hand, critics such as Sanders and Khanna highlight the necessity for caution, stressing that technological advancements often outpace the social, economic, and ethical frameworks required for responsible management. Their concerns are rooted in historical patterns where technological optimism has sometimes led to concentrated wealth and power, along with unforeseen societal consequences.

The ongoing dialogue between Khosla and lawmakers illustrates the complexities surrounding the development and implementation of artificial intelligence, a technology that promises significant advancements but also raises critical ethical and regulatory questions.

According to The American Bazaar, this exchange is part of a larger conversation about the future of AI and its impact on society.
