From Our Blog

Check out some recent articles and posts from our blog.

Meta AI’s new chatbot raises privacy alarms

Meta’s new AI chatbot is getting personal, and it might be sharing more than you realize. A recent app update introduced a "Discover" feed that makes user-submitted chats public, complete with prompts and AI responses. Some of those chats include everything from legal troubles to medical conditions, often with names and profile photos still attached. The result is a privacy nightmare in plain sight.

If you’ve ever typed something sensitive into Meta AI, now is the time to check your settings and find out just how much of your data could be exposed.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide - free when you join my CYBERGUY.COM/NEWSLETTER.

Meta’s AI app, launched in April 2025, is designed to be both a chatbot and a social platform. Users can chat casually or dive deep into personal topics, from relationship questions to financial concerns or health issues.

What sets Meta AI apart from other chatbots is the "Discover" tab, a public feed that displays shared conversations. It was meant to encourage community and creativity, letting users showcase interesting prompts and responses. Unfortunately, many didn’t realize their conversations could be made public with just one tap, and the interface often fails to make the public/private distinction clear.

The feature positions Meta AI as a kind of AI-powered social network, blending search, conversation, and status updates. But what sounds innovative on paper has opened the door to major privacy slip-ups.

Privacy experts are sounding the alarm over Meta's Discover tab, calling it a serious breach of user trust. The feed surfaces chats containing legal dilemmas, therapy discussions, and deeply personal confessions, often linked to real accounts. In some cases, names and profile photos are visible. Although Meta says only shared chats appear, the interface makes it easy to hit "share" without realizing it means public exposure. Many assume the button saves the conversation privately. Worse, logging in with a public Instagram account can make shared AI activity publicly accessible by default, increasing the risk of identification.

Some posts reveal sensitive health or legal issues, financial troubles, or relationship conflicts. Others include contact details or even audio clips. A few contain pleas like "keep this private," written by users who didn't realize their messages would be broadcast. These aren't isolated incidents, and as more people use AI for personal support, the stakes will only get higher.

If you're using Meta AI, check your privacy settings and manage your prompt history to avoid accidentally sharing something sensitive. To keep your future prompts private:

On a phone (iPhone or Android):

On the website (desktop): 

Fortunately, you can change the visibility of prompts you've already posted, delete them entirely, and update your settings to keep future prompts private.

On a phone (iPhone or Android):

On the website (desktop):

If other users replied to your prompt before you made it private, those replies will remain attached but won’t be visible unless you reshare the prompt. Once reshared, the replies will also become visible again.

On both the app and the website:

This issue isn’t unique to Meta. Most AI chat tools, including ChatGPT, Claude, and Google Gemini, store your conversations by default and may use them to improve performance, train future models, or develop new features. What many users don’t realize is that their inputs can be reviewed by human moderators, flagged for analysis, or saved in training logs.

Even if a platform says your chats are "private," that usually just means they aren’t visible to the public. It doesn’t mean your data is encrypted, anonymous, or protected from internal access. In many cases, companies retain the right to use your conversations for product development unless you specifically opt out, and finding that opt-out isn't always straightforward.

If you’re signed in with a personal account that includes your real name, email address, or social media links, your activity may be easier to connect to your identity than you think. Combine that with questions about health, finances, or relationships, and you’ve essentially created a detailed digital profile without meaning to.

Some platforms now offer temporary chat modes or incognito settings, but these features are usually off by default. Unless you manually enable them, your data is likely being stored and possibly reviewed.

The takeaway: AI chat platforms are not private by default. You need to actively manage your settings, be mindful of what you share, and stay informed about how your data is being handled behind the scenes.

AI tools can be incredibly helpful, but without the right precautions, they can also open you up to privacy risks. Whether you're using Meta AI, ChatGPT, or any other chatbot, here are some smart, proactive ways to protect yourself:

1) Use aliases and avoid personal identifiers: Don’t use your full name, birthday, address, or any details that could identify you. Even first names combined with other context can be risky.

2) Never share sensitive information: Avoid discussing medical diagnoses, legal matters, bank account info, or anything you wouldn’t want on the front page of a search engine.

3) Clear your chat history regularly: If you’ve already shared sensitive info, go back and delete it. Many AI apps let you clear chat history through Settings or your account dashboard.

4) Adjust privacy settings often: App updates can sometimes reset your preferences or introduce new default options. Even small changes to the interface can affect what’s shared and how. It’s a good idea to check your settings every few weeks to make sure your data is still protected.

5) Use an identity theft protection service: Scammers actively look for exposed data, especially after a privacy slip. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also help you freeze your bank and credit card accounts to prevent further unauthorized use by criminals. Visit Cyberguy.com/IdentityTheft for tips and recommendations.

6) Use a VPN for extra privacy: A reliable VPN hides your IP address and location, making it harder for apps, websites, or bad actors to track your online activity. It also adds protection on public Wi-Fi, shielding your device from hackers who might try to snoop on your connection. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN.

7) Don’t link AI apps to your real social accounts: If possible, create a separate email address or dummy account for experimenting with AI tools, and keep your main profiles disconnected. To create a quick email alias you can use to keep your main accounts protected, visit Cyberguy.com/Mail.
 

Meta’s decision to turn chatbot prompts into social content has blurred the line between private and public in a way that catches many users off guard. Even if you think your chats are safe, a missed setting or default option can expose more than you intended. Before typing anything sensitive into Meta AI or any chatbot, pause. Check your privacy settings, review your chat history, and think carefully about what you're sharing. A few quick steps now can save you from bigger privacy headaches later.

With so much sensitive data potentially at risk, do you think Meta is doing enough to protect your privacy, or is it time for stricter guardrails on AI platforms? Let us know by writing to us at Cyberguy.com/Contact.

SparkKitty mobile malware targets Android and iPhone

Bad actors constantly seek every bit of personal information they can get, from your phone number to your government ID. Now, a new threat targets both Android and iPhone users: SparkKitty, a powerful mobile malware strain that scans private photos to steal cryptocurrency recovery phrases and other sensitive data.

Researchers at cybersecurity firm Kaspersky recently identified SparkKitty. This malware appears to succeed SparkCat, a campaign first reported earlier this year that used optical character recognition (OCR) to extract sensitive data from images, including crypto recovery phrases.

SparkKitty goes even further than SparkCat. According to Kaspersky, SparkKitty uploads images from infected phones indiscriminately. This tactic exposes not just wallet data but also any personal or sensitive photos stored on the device. While the main target seems to be crypto seed phrases, criminals could use other images for extortion or other malicious purposes.

Kaspersky researchers report that SparkKitty has operated since at least February 2024. Attackers distributed it through both official and unofficial channels, including Google Play and the Apple App Store.

Kaspersky found SparkKitty embedded in several apps, including one called 币coin on iOS and another called SOEX on Android. Both apps are no longer available in their respective stores. SOEX, a messaging app with cryptocurrency-related features, reached more than 10,000 downloads from the Google Play Store before its removal.

On iOS, attackers deliver the malware through fake software frameworks or enterprise provisioning profiles, often disguised as legitimate components. Once installed, SparkKitty uses a method native to Apple’s Objective-C programming language to run as soon as the app launches. It checks the app’s internal configuration files to decide whether to execute, then quietly starts monitoring the user’s photo library.

On Android, SparkKitty hides in apps written in Java or Kotlin and sometimes uses malicious Xposed or LSPosed modules. It activates when the app launches or after a specific screen opens. The malware then decrypts a configuration file from a remote server and begins uploading images, device metadata, and identifiers.

Unlike traditional spyware, SparkKitty focuses on photos, especially those containing cryptocurrency recovery phrases, wallet screenshots, IDs, or sensitive documents. Instead of just monitoring activity, SparkKitty uploads images in bulk. This approach makes it easy for criminals to sift through and extract valuable personal data. 

1) Stick to trusted developers: Avoid downloading obscure apps, especially if they have few reviews or downloads. Always check the developer's name and history before installing anything.

2) Review app permissions: Be cautious of apps that request access to your photos, messages, or files without a clear reason. If something feels off, deny the permission or uninstall the app. For one way to audit which apps can reach your photos, see the sketch after this list.

3) Keep your device updated: Install system and security updates as soon as they are available. These updates often patch vulnerabilities that malware can exploit.

4) Use mobile security software: The best way to safeguard yourself from malicious software is to have strong antivirus software installed on all your devices. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices by visiting CyberGuy.com/LockUpYourTech.
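
For readers who want to go a step further with tip 2, here is a minimal Kotlin sketch of one way to audit photo access on Android. It lists installed apps that have actually been granted photo or media permissions so you can review them and revoke anything you don't recognize. This is an illustrative example, not a CyberGuy tool: the helper name appsWithPhotoAccess is made up for this sketch, and on Android 11 and later an app can only see other packages if it declares the QUERY_ALL_PACKAGES permission or matching <queries> entries in its manifest.

    import android.content.Context
    import android.content.pm.PackageInfo
    import android.content.pm.PackageManager

    // Illustrative helper (hypothetical name): returns the package names of
    // installed apps that have been granted photo/media access.
    fun appsWithPhotoAccess(context: Context): List<String> {
        val photoPermissions = setOf(
            "android.permission.READ_MEDIA_IMAGES",    // Android 13 and newer
            "android.permission.READ_EXTERNAL_STORAGE" // older Android versions
        )
        val pm = context.packageManager
        // GET_PERMISSIONS fills in requestedPermissions and requestedPermissionsFlags.
        return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
            .filter { info: PackageInfo ->
                val requested = info.requestedPermissions ?: return@filter false
                val flags = info.requestedPermissionsFlags ?: return@filter false
                requested.indices.any { i ->
                    requested[i] in photoPermissions &&
                        (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
                }
            }
            .map { it.packageName }
    }

If code isn't your thing, most Android phones offer the same check under Settings > Privacy > Permission manager (the exact path varies by manufacturer), where you can see and revoke each app's photo access on the spot.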

Both Apple and Google removed the identified apps after being alerted, but questions remain about how SparkKitty bypassed their app review processes in the first place. As app stores grow, both in volume and complexity, the tools used to screen them will need to evolve at the same pace. Otherwise, incidents like this one will continue to slip through the cracks.

Do you think Google and Apple are doing enough to protect users from mobile malware and evolving security threats? Let us know by writing to us at Cyberguy.com/Contact.

Paralyzed man speaks and sings with AI brain-computer interface

When someone loses the ability to speak because of a neurological condition like ALS, the impact goes far beyond words. It touches every part of daily life, from sharing a joke with family to simply asking for help. Now, thanks to a team at the University of California, Davis, there's a new brain-computer interface (BCI) system that's opening up real-time, natural conversation for people who can't speak. This technology isn't just about converting thoughts into text. Instead, it translates the brain signals that would normally control the muscles used for speech, allowing users to "talk" and even "sing" through a computer, almost instantly.

The heart of this system is four microelectrode arrays, surgically implanted in the part of the brain responsible for producing speech. These tiny devices pick up the neural activity that happens when someone tries to speak. The signals are then fed into an AI-powered decoding model, which converts them into audible speech in just ten milliseconds. That's so fast, it feels as natural as a regular conversation.

What's especially remarkable is that the system can recreate the user's own voice, thanks to a voice cloning algorithm trained on recordings made before the onset of ALS. This means the person's digital voice sounds like them, not a generic computer voice. The system even recognizes when the user is trying to sing and can change the pitch to match simple melodies. It can also pick up on vocal nuances, like asking a question, emphasizing a word, or making interjections such as "aah," "ooh," or "hmm." All of this adds up to a much more expressive and human-sounding conversation than previous technologies could offer.

The process starts with the participant attempting to speak sentences shown on a screen. As they try to form each word, the electrodes capture the firing patterns of hundreds of neurons. The AI learns to map these patterns to specific sounds, reconstructing speech in real time. This approach allows for subtle control over speech rhythm and tone, giving the user the ability to interrupt, emphasize, or ask questions just as anyone else would.

One of the most striking outcomes from the UC Davis study was that listeners could understand nearly 60 percent of the synthesized words, compared to just 4 percent without the BCI. The system also handled new, made-up words that weren't part of its training data, showing its flexibility and adaptability.

Being able to communicate in real time, with one's own voice and personality, is a game-changer for people living with paralysis. The UC Davis team points out that this technology allows users to be more included in conversations. They can interrupt, respond quickly, and express themselves with nuance. This is a big shift from earlier systems that only translated brain signals into text, which often led to slow, stilted exchanges that felt more like texting than talking.

As David Brandman, the neurosurgeon involved in the study, put it, our voice is a core part of our identity. Losing it is devastating, but this kind of technology offers real hope for restoring that essential part of who we are.

While these early results are promising, the researchers are quick to point out that the technology is still in its early stages. So far, it's only been tested with one participant, so more studies are needed to see how well it works for others, including people with different causes of speech loss, like stroke. The BrainGate2 clinical trial at UC Davis Health is continuing to enroll participants to further refine and test the system.

Restoring natural, expressive speech to people who have lost their voices is one of the most meaningful advances in brain-computer interface technology. This new system from UC Davis shows that it's possible to bring real-time, personal conversation back into the lives of those affected by paralysis. While there's still work to be done, the progress so far is giving people a chance to reconnect with their loved ones and the world around them in a way that truly feels like their own.

As brain-computer interfaces become more advanced, where should we draw the line between enhancing lives and altering the essence of human interaction? Let us know by writing to us at Cyberguy.com/Contact.

9 online privacy risks you probably don’t know about

Privacy risks are hiding in plain sight, as your personal data is likely being collected, tracked, and sold without your knowledge. It's not just your name and email out there; data brokers are gathering much more sensitive information about your daily life, including your sleep patterns, medical visits, online habits, and even your relationship status. These details are compiled into detailed personal profiles and sold to advertisers, insurance companies, political campaigns, and in some cases, cybercriminals. What makes this especially concerning is that most of it happens quietly in the background, often without your consent.

You may think you're protecting your privacy, but chances are you're revealing more than you think through your everyday digital activity.

Fitness trackers, bedtime apps, and even your phone’s settings feed data brokers info about when you sleep, wake, and work out. That’s highly sensitive health data.

A recent data leak exposed over 8 million patient records, allowing cybercriminals to build detailed medical profiles that could be used for identity theft, insurance fraud, and phishing attacks. Recent research also reveals that over 28% of Americans have had their SSN breached since 2020, putting them at increased risk of cyberattacks.

Every time you binge a show or stream a documentary, your smart TV, streaming apps, and browser record exactly what you’re watching, when you watch it, and how long you stay tuned in. This data helps build a behavioral profile of your tastes, routines, and emotional triggers.

It’s not just used for harmless recommendations; advertisers and data brokers tap into this to predict your mood, interests, and even potential vulnerabilities. Ever wonder why oddly specific ads start showing up after a documentary binge? This is why. 

It’s not just the articles you click; it’s how long you linger on them that matters. Data brokers monitor whether you skim or dive deep into topics like health scares, financial worries, or personal relationships.

The time you spend on certain pages helps them identify your fears, desires, and private interests. This insight can later be used for hyper-targeted ads or, worse, by malicious actors looking to exploit your anxieties.

You might keep your relationship off social media, but your online footprint gives you away. Your purchase history, social check-ins, and frequent location visits tell data brokers whether you’re single, dating, engaged, or married.

They can even infer relationship trouble by analyzing certain patterns, like increased visits to bars or late-night takeout orders. This deeply personal information can end up in a detailed profile on some sketchy website you’ve never heard of.

Your phone’s location data doesn’t just map your commute; it tracks visits to places like fertility clinics, addiction centers, and therapists’ offices. That data gets sold to brokers who categorize you based on these visits, sometimes flagging you for health-related concerns you haven’t publicly shared.

One study found that 74% of health-related data was sold without users’ knowledge or consent. This information could be used to hike insurance rates, deny you payouts, or target you with sensitive, intrusive ads.

Public records make it easy for data brokers to access your home’s value, tax history, and neighborhood crime rates. These are used to target you with aggressive refinancing offers, alarm system ads, or moving service promotions.

Scammers also use this data to profile households they think are vulnerable based on property values or crime rates. The result is a flood of junk mail, spam calls, and targeted online ads you never asked for, or worse, physical safety risks.

By monitoring Wi-Fi connections, shared deliveries, smart home devices, and online purchase patterns, data brokers can determine exactly how many people live in your home. They often build profiles on your family members too—even if they’ve never created an online account themselves.

This allows advertisers to tailor ads to your household, making your family’s online activities part of your digital profile. It’s invasive, and most people have no idea it’s happening. 

Even if you keep politics off your social media feeds, your browsing history tells a different story. The news articles you read, political newsletters you subscribe to, and nonprofits you donate to all get tracked.

Data brokers use this to place you on lists of likely voters for certain parties or causes. This can lead to politically targeted ads, donation requests, and even manipulation attempts around election seasons, all without your explicit permission.

The internet picks up on your major life milestones long before you announce them. If you start browsing for engagement rings, baby gear, or moving boxes, data brokers immediately flag those behaviors.

This triggers waves of ads and marketing campaigns designed to capitalize on your upcoming life changes. In many cases, you’ll start seeing offers and promotions months before you tell your closest friends or family members.
 

While no service can completely erase every trace of your data online, using a trusted data removal service is one of the most effective steps you can take. These services actively monitor and submit removal requests to hundreds of data broker websites, saving you hours of tedious work. It is not cheap, but when it comes to protecting your personal privacy, the cost is worth it. Reducing the amount of exposed data tied to your name lowers your risk of being targeted by scammers who often combine breached data with what they find online. If you are ready to take control of your personal information, check out my top picks for data removal services here.

Get a free scan to find out if your personal information is already out on the web

Your online activity reveals more than you think, and you do not need to overshare on social media for your data to end up in the wrong hands. Everything from your location history to your streaming habits can be tracked, sold, and used to build a profile on you. That profile can be used by advertisers, data brokers, political groups, or even cybercriminals. The good news is that you can push back. Being aware of what you are sharing is the first step. Second, using a trusted data removal service can make a real difference. You do not need to be paranoid, but you do need to be proactive. Taking control of your digital footprint is one of the smartest things you can do to protect your privacy in today's hyper-connected world.

Do you think more needs to be done to stop companies from knowing everything about you while you’re left in the dark? Let us know by writing to us at Cyberguy.com/Contact.

Copyright 2025 CyberGuy.com. All rights reserved.