From Our Blog
Figure data breach exposes nearly 1M accounts
If you have applied for a loan online, you probably shared more than you realized. Your name. Your email. Your date of birth. Maybe even your home address and phone number. Now imagine all of that sitting on a dark web forum.
That is the reality for nearly 1 million people after hackers breached Figure Technology Solutions, a blockchain-focused fintech lender.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
Figure Technology Solutions, founded in 2018, uses the Provenance blockchain for lending, borrowing and securities trading. The company says it has unlocked more than $22 billion in home equity through partnerships with banks, credit unions, fintechs and home improvement companies. However, behind the scenes, attackers were working on a very different angle.
According to breach notification data shared by Have I Been Pwned, information from 967,200 accounts was exposed. The leaked data included more than 900,000 unique email addresses along with names, phone numbers, physical addresses and dates of birth. That is a gold mine for identity thieves. Figure says the incident stemmed from a social engineering attack. What that means in simple terms is that someone inside the company was tricked into handing over access.
"We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account," a Figure Technology Solutions spokesperson told CyberGuy in a statement. "We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate. We are also implementing additional safeguards and training to further strengthen our defenses. We are offering complimentary credit monitoring to all individuals who receive a notice. We continuously monitor accounts and have strong safeguards in place to protect customers' funds and accounts."
When people hear the word blockchain, they think secure and untouchable. But attackers did not break cryptography. They targeted a human being. Groups like ShinyHunters specialize in this playbook. They reportedly claimed responsibility for the breach and, according to BleepingComputer, posted 2.5GB of data allegedly tied to thousands of loan applicants.
In recent weeks, the same group has claimed breaches involving companies like Canada Goose, Panera Bread and SoundCloud. Not every case is connected. Still, security researchers have observed a troubling pattern. Attackers impersonate IT support. They call employees. They create urgency. Then they direct victims to fake login portals that look nearly identical to real ones.
Once employees enter credentials and even multi-factor authentication codes, attackers gain access to single sign-on systems tied to major platforms like Microsoft and Google. From there, one compromised account can unlock a web of connected tools and internal systems.
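One common tell in these attacks is a look-alike domain hosting the fake login portal. As a toy illustration (not a product, and far cruder than real anti-phishing tooling), a defender can flag domains that sit within a small edit distance of a trusted one:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, trusted, max_edits=2):
    """Flag domains suspiciously close to (but not equal to) a trusted one."""
    d = edit_distance(domain.lower(), trusted.lower())
    return 0 < d <= max_edits
```

For example, `looks_like("rnicrosoft.com", "microsoft.com")` flags the classic "rn" for "m" swap, while the genuine domain itself is not flagged.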
Why this matters to you
If your information was part of the Figure data breach, criminals now have enough detail to craft convincing phishing emails or phone scams. They can reference your real name. They can cite your address. They can pretend to be a lender or bank calling about your application.
Even if you never applied for a loan with Figure, this incident highlights something bigger. No platform is immune to human error. And social engineering works because it targets trust, not technology.
Figure markets itself as blockchain native. Blockchain can provide transparency and strong cryptographic security. However, none of that protects against a well-crafted phone call.
Security failures often happen at the human layer. That is where attackers focus their energy. As more financial services move online, the attack surface grows. Loan applications, identity verification tools and cloud-based systems create convenience. They also create new targets.
How to protect yourself after the Figure data breach
You cannot control how companies secure their systems. You can control how you respond. Start by checking whether your email address appears in the exposed dataset, then take the steps below to lock down your accounts.
To see if your email address was affected, visit https://haveibeenpwned.com/ and enter your email address to find out whether your information appears in the leak.
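While you are there, it is also worth checking your passwords. Have I Been Pwned offers a free Pwned Passwords "range" API built on k-anonymity: only the first five characters of your password's SHA-1 hash ever leave your machine, and the matching is done locally. A minimal Python sketch (the network call is illustrative; the hash-splitting is the privacy-preserving part):

```python
import hashlib
import urllib.request

def hash_prefix_suffix(password):
    """Split a password's SHA-1 hex digest into the 5-char prefix
    sent to the API and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many times the password appears in known breaches."""
    prefix, suffix = hash_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

A nonzero count means that password has appeared in a breach and should be retired everywhere you use it.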
Also, be cautious of unexpected calls about your accounts. If someone pressures you to act immediately, hang up and call the company directly using a number from its official website.
The Figure data breach is a reminder that technology alone cannot protect sensitive information. A single employee tricked into revealing credentials can expose hundreds of thousands of people. That is not a blockchain failure. It is a trust failure. If your data was involved, take action now. Even if it was not, treat this as a wake-up call. Your personal information has value. Criminals know it. Companies should know it too.
If one phone call can unlock nearly a million records, are companies investing enough in training people, or are they still betting everything on technology alone? Let us know by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
China's compact humanoid robot shows off balance and flips
Humanoid robotics companies have already shown their machines can run at 22 mph, land backflips and even pull off front flips. So the new proving ground is not raw speed or acrobatics. It is control when something unexpected happens. That is where the EngineAI PM01 humanoid robot comes in.
In newly released footage, the compact humanoid keeps dancing after being deliberately pushed off balance. It performs a controlled forward slip, absorbs the disruption and smoothly regains rhythm within seconds. The motion looks fluid and surprisingly natural.
Then it lands another front flip, this time as part of a broader demonstration of balance and recovery.
Speed gets attention. Recovery earns trust. When someone shoves the PM01, it does not freeze. It recalculates its center of mass, adjusts joint torque and corrects posture in real time. That level of control depends on tight coordination between sensors, actuators and AI algorithms. The front flip adds another challenge.
Front flips are typically harder than backflips. Rotating forward shifts the body weight ahead of the support base. That makes landings less forgiving. The EngineAI PM01 humanoid robot executes the move with coordinated arm swing, core stabilization and accurate landing mechanics. This is not about flashy tricks. It is about controlled dynamic motion under stress.
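The push-recovery behavior described above can be illustrated with a toy model. Roboticists often approximate a standing humanoid as an inverted pendulum; the sketch below (a deliberately simplified stand-in, not EngineAI's controller) uses a basic PD feedback law to drive the lean angle back to vertical after a simulated shove:

```python
import math

def simulate_push_recovery(push_velocity, kp=60.0, kd=12.0,
                           dt=0.002, steps=2000,
                           mass=1.0, length=0.5, g=9.81):
    """Return the pendulum lean angle (rad) after `steps` of PD control.

    theta = lean from vertical; the 'push' sets an initial angular
    velocity, and torque = -kp*theta - kd*omega fights gravity.
    """
    theta, omega = 0.0, push_velocity
    inertia = mass * length * length
    for _ in range(steps):
        torque = -kp * theta - kd * omega
        # gravity tries to topple the pendulum; controller torque resists
        alpha = (mass * g * length * math.sin(theta) + torque) / inertia
        omega += alpha * dt
        theta += omega * dt
    return theta
```

After a shove in either direction, the controller damps the disturbance and the lean angle settles back to essentially zero, which is the simplest version of "recalculates its center of mass and corrects posture."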
The PM01 stands just under 4 feet tall. That smaller build works to its advantage. A lower center of mass reduces tipping risk and requires less rotational force during flips. Its lighter structure also helps distribute impact forces more efficiently when it lands.
By comparison, EngineAI's larger SE01 stands about 4 feet, 6 inches tall and weighs 88 pounds. The PM01 is roughly 10.5 inches shorter and about 17.6 pounds lighter. That size difference makes it more agile in research and development settings.
Full-sized humanoids face greater mechanical stress during high-impact maneuvers. They need stronger actuators, reinforced joints and heavier structural support to stay stable. Compact robots like the EngineAI PM01 can achieve advanced movement with less overall strain.
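The rotational demands of a flip can be made concrete with back-of-the-envelope kinematics (illustrative numbers, not EngineAI specs): a jump reaching apex height h gives an airtime of 2*sqrt(2h/g), and a full flip must pack 2*pi radians of rotation into that window.

```python
import math

def flip_spin_rate(jump_height_m, g=9.81):
    """Angular velocity (rad/s) needed to rotate a full 2*pi radians
    during the airtime of a jump reaching `jump_height_m` at its apex."""
    airtime = 2.0 * math.sqrt(2.0 * jump_height_m / g)
    return 2.0 * math.pi / airtime
```

For a modest 0.3 m jump, that works out to roughly 12.7 rad/s, or about two full rotations per second, which is why lighter, lower bodies that need less rotational force have an edge.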
Under the hood, the EngineAI PM01 humanoid robot combines advanced perception with serious computing power. It uses an Intel RealSense depth camera for visual awareness and spatial mapping. A dual-chip setup integrates Nvidia Jetson Orin with an Intel N97 processor. That architecture supports real-time AI workloads and rapid balance correction when the robot is pushed or slips.
The robot features 24 degrees of freedom, including 12 joint motors. This design allows smooth coordinated movement across its limbs and torso. In the small humanoid segment, PM01 competes with models like the Unitree G1 and the Booster T1. It walks at up to about 4.5 miles per hour, faster than the T1, though still below some larger high-speed humanoid platforms built for sprint performance.
EngineAI appears less focused on headline-grabbing speed and more focused on refined stability and controlled motion.
As humanoid videos go viral, skepticism follows. EngineAI recently addressed CGI accusations by releasing footage of its T800 humanoid physically interacting with its CEO. The company clearly wants to demonstrate that its robots operate in the real world.
That credibility push matters. In a crowded robotics market, bold claims are common. Physical demonstrations help separate engineering progress from digital effects.
Right now, this looks like a polished demo. However, balance and recovery are critical for real-world use. If humanoid robots are going to work in warehouses, hospitals or our homes, they must handle bumps, slips and unexpected contact without causing damage. A machine that can brace itself, fall safely and stand back up is far more practical than one that performs a single choreographed stunt. As humanoids move closer to everyday environments, resilience becomes just as important as athletic performance. The more stable they are, the more comfortable people will feel sharing space with them.
Humanoid robots can already run fast, flip and move with serious athletic ability. What companies are racing to perfect now is something more practical: balance when things go wrong. The EngineAI PM01 humanoid robot shows how compact design and real-time correction can help a machine stay upright, recover quickly and keep moving without chaos. That kind of control matters far more in a crowded warehouse, hospital hallway or public space than a perfectly staged stunt. We are starting to see the shift from viral demo moments to robots built for everyday reliability. The real breakthrough is not the flip. It is what happens after the push.
When humanoid robots can absorb a shove, land a flip and get back to work without missing a beat, how close are we to seeing them in your neighborhood? Let us know by writing to us at Cyberguy.com.
Why the Microsoft 365 Copilot bug matters for data security
You trust your email security settings for a reason. So when an AI assistant quietly reads and summarizes messages marked confidential, that trust takes a hit.
Microsoft says a bug in Microsoft 365 Copilot allowed its AI chat feature to process sensitive emails since late January.
The issue bypassed Data Loss Prevention policies that organizations rely on to protect private information. Put simply, emails that were supposed to stay locked down were being summarized anyway.
Microsoft says a coding error impacted Microsoft 365 Copilot Chat, specifically the "work tab" feature. The AI assistant helps business users summarize content, draft responses and analyze information across Word, Excel, PowerPoint, Outlook and OneNote.
Beginning Jan. 21, an internal bug labeled CW1226324 caused Copilot to read and summarize emails stored in Sent Items and Drafts folders.
The real concern runs deeper. Several of those messages carried confidentiality or sensitivity labels.
Companies apply those labels along with DLP policies to block automated systems from accessing restricted content. Despite those safeguards, Copilot still generated summaries.
We reached out to Microsoft, and a spokesperson provided CyberGuy with the following statement:
"We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren't already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers."
AI tools feel helpful. They save time and reduce busy work. But they also rely on deep access to your data. When safeguards fail, even temporarily, sensitive content can move in ways you did not expect.
For businesses, that could mean:
Legal discussions summarized outside intended controls
Financial projections processed despite restrictions
HR communications exposed to automated analysis
Even if no data leaves the organization, the bypass itself raises concerns about how AI integrates with enterprise security systems.
Microsoft says it began rolling out a fix in early February. The company continues to monitor deployment and is contacting some affected users to verify the fix works.
However, Microsoft has not provided a final timeline for full remediation. It has also not disclosed how many organizations were affected.
The issue is tagged as an advisory, which usually signals limited scope or impact. Still, many security professionals will want deeper clarity before feeling comfortable.
This incident highlights something many companies are wrestling with right now. AI assistants sit inside productivity platforms. They need access to email, documents and collaboration tools to work well.
At the same time, those platforms contain your most sensitive information. When AI features expand quickly, security policies must evolve just as fast. Otherwise, even a small code mistake can create unexpected exposure.
If your organization uses Microsoft 365 Copilot, here are practical steps to reduce risk:
Work with your IT team to confirm which folders and data sources Copilot can access.
Test sensitivity labels and DLP (Data Loss Prevention) rules to ensure they block AI processing as intended.
Stay current on Microsoft service alerts and verify that the fix is fully deployed in your tenant.
If you have concerns, consider temporarily restricting Copilot features until verification is complete.
Remind staff that AI assistants can process drafts and send messages. Encourage careful handling of sensitive content.
Review audit logs to see whether Copilot accessed or summarized labeled emails. This helps determine actual exposure rather than assumed risk.
Confirm that confidential labels are configured to block AI processing where required. Misconfigured labels can create gaps even after a bug is fixed.
Because the issue involved Sent Items and Drafts, evaluate whether sensitive drafts should be stored long-term or deleted after sending.
Instead of enabling Copilot organization-wide, consider a phased deployment to departments with lower sensitivity exposure.
Use this moment to reassess how AI tools integrate with compliance controls. Treat it as a learning opportunity rather than a one-time glitch.
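For the audit-log review above, one practical approach is to export audit records from your tenant's audit search to JSON and filter them locally. A minimal sketch; the `Operation` value and field names here are assumptions you should verify against your own export before relying on the results:

```python
from collections import Counter

def copilot_activity_by_user(records, operation="CopilotInteraction"):
    """Count Copilot audit events per user from exported audit records.

    `records` is a list of dicts as exported from an audit search.
    The field names ("Operation", "UserId") and the operation string
    are illustrative and may differ in your tenant's export schema.
    """
    return Counter(r.get("UserId", "unknown")
                   for r in records
                   if r.get("Operation") == operation)
```

Sorting the resulting counts highlights which accounts had the most AI activity during the affected window, which is a reasonable starting point for determining actual exposure.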
Pro Tip: This Copilot bug centers on enterprise controls. Even so, AI tools operate on your devices and accounts, so keeping software up to date and using strong antivirus software adds an important layer of defense. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
Enterprise AI bugs raise a bigger question: how much access should email platforms have to your data in the first place? If you want an added layer of privacy beyond mainstream providers, privacy-focused email services are worth exploring.
Some offer end-to-end encryption, support for PGP encryption and a strict no-ads business model that avoids scanning messages for marketing purposes.
Many also allow you to create disposable email aliases, which can reduce spam and limit exposure if one address is compromised.
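Even without switching providers, some mainstream services (Gmail, for example) support plus-addressing, where mail sent to user+tag@domain is delivered to user@domain. That gives you a poor man's alias per service, so you can tell which one leaked your address. A tiny sketch:

```python
def make_alias(address, tag):
    """Build a plus-addressed alias like user+shop@example.com.

    Providers that support plus-addressing deliver user+tag@domain
    to user@domain, letting you trace which service leaked or sold
    the address. Not all providers or signup forms accept '+'.
    """
    local, _, domain = address.partition("@")
    if not local or not domain:
        raise ValueError(f"not an email address: {address!r}")
    return f"{local}+{tag}@{domain}"
```

If breach spam ever arrives at jane+storename@example.com, you know exactly where it came from, and you can filter or retire that alias.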
While no provider is immune to software bugs, choosing an email service built around privacy rather than data monetization can limit how much of your information is accessible to automated systems in the first place.
For individuals, journalists and small businesses especially, that added control can make a meaningful difference.
For recommendations on private and secure email providers that offer alias addresses, visit Cyberguy.com
AI assistants are becoming part of daily work life. They promise speed, efficiency and smarter workflows. But convenience should never outrun security.
This Copilot bug may have a limited impact. Still, it serves as a reminder that AI tools are only as strong as the guardrails behind them.
When those guardrails slip, even briefly, sensitive information can move in unexpected ways. As AI becomes more embedded in business software, trust will depend on transparency, fast fixes and clear communication.
Here is the real question: If your AI assistant can see everything you write, are you fully confident it respects every boundary you set? Let us know by writing to us at Cyberguy.com.