Friday, August 8, 2025

When Doing Laundry Becomes a Cybersecurity Lesson

 In a twist that reads like part tech-startup pitch, part cybersecurity thriller, two UC Santa Cruz students, Alexander Sherbrooke and Iakov Taranenko, discovered a serious security flaw in CSC ServiceWorks' IoT laundry machines. By reverse engineering the mobile app's API, they exposed a loophole that allowed them to run machines without paying and even top up their laundry accounts with multi-million-dollar balances.

What's striking, and alarming, is how easily the flaw could have been exploited beyond a harmless prank. Because CSC's servers blindly trusted commands that appeared to come from the app, the students were able to manipulate machine behavior directly. While a physical button still needed to be pressed for a cycle to start, the vulnerability exposed the lack of backend checks and raised serious concerns about heavy-duty connected appliances operating over the internet.
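
The core failure was trust: the backend accepted whatever the client claimed. Here is a minimal sketch of the kind of server-side check that was missing. Every name and amount below is hypothetical; CSC's actual API is not public.

```python
# Hypothetical server-side check for a balance top-up request.
# All identifiers here are illustrative, not CSC's real API.

ALLOWED_TOPUPS_CENTS = {5_00, 10_00, 20_00}  # values the service actually sells

def validate_topup(amount_cents: int, payment_confirmed: bool) -> bool:
    """Accept a top-up only if the payment processor confirmed it
    and the amount is one the service actually offers."""
    if not payment_confirmed:  # never trust the client's claim of payment
        return False
    if amount_cents not in ALLOWED_TOPUPS_CENTS:  # blocks million-dollar balances
        return False
    return True
```

Even a check this simple would have stopped both the free cycles and the inflated balances, because the decision happens on the server rather than in the app.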

This bug shines a light on the often-overlooked risks of IoT devices, especially everyday items like washing machines. It underscores the importance of ethical disclosure, vendor responsiveness, and robust security design. Thankfully, after the vulnerability went public, CSC finally acknowledged the issue and began patching it, showing how transparency and prompt action are vital in keeping us all safer.

By: Joshua Xiong

Sources: https://www.theverge.com/2024/5/19/24160383/students-security-bug-laundry-machines-csc-serviceworks

Sunday, August 3, 2025

AI-Generated Profile Pics: Romance Scams on Dating Apps

 Hello guys, here's an interesting topic I came across this week.

Scammers are now using AI-generated profile pictures to populate fake profiles on dating apps, making romance scams harder to spot and more convincing than ever. This trend is part of evolving "pig butchering" schemes, where criminals lure victims into imaginary relationships to eventually extract money, often via cryptocurrency.

To protect yourself, stay alert for red flags like someone avoiding video calls, sounding too perfect, or trying to move the conversation off the app quickly. Ask questions that AI bots may struggle to answer. Use caution when profiles look too polished or overly generic. Dating platforms are beginning to crack down by adding ID verification features, but users are still the first line of defense.

I found it quite alarming how AI has shifted the balance of power in online deception. I usually write about how to keep yourself safe, but this one feels more like a personal-interest post. Scammers no longer need to steal photos or write fake bios themselves; AI can generate perfect-looking people and flawless conversations in seconds. It's practically catfishing, but a lot worse. It just shows that AI isn't only a tool for productivity; it's also being weaponized to manipulate emotions and exploit human trust in completely new ways.

Source: https://www.bloomberg.com/news/newsletters/2024-02-14/scammers-litter-dating-apps-with-ai-generated-profile-pics

By: Joshua Xiong

Sunday, July 27, 2025

What you should do after a data breach!

 According to cybersecurity experts featured by CNBC, the biggest error individuals or organizations make following a data breach is pretending it didn't happen. Actively avoiding the issue or delaying responses only amplifies risk and leaves affected data exposed to further misuse.

Here are some things you should do instead:
1) Understand exactly what was exposed, how it happened, and who might be affected. Acknowledge the breach immediately; denial only delays recovery.

2) Immediately change passwords on breached accounts and any others using the same or similar credentials. Use strong, unique passwords and a password manager for better protection.

3) Add a second layer of security wherever possible: SMS codes, authenticator apps, or hardware keys. These stop unauthorized access even if a password is compromised.

4) Watch for unusual charges or credit inquiries. If financial data was exposed, consider placing a fraud alert or freezing your credit with major bureaus to prevent new accounts from being opened in your name.

5) Phishers and scammers often exploit breach news. Be skeptical of unsolicited emails, texts, or calls claiming to help with the breach.

6) Depending on your location or industry, there may be legal processes for disclosing a breach, especially for businesses. Consult counsel or breach notification guidelines as needed.

Simply hoping a breach goes away is the worst response. Facing reality swiftly and strategically is your most powerful tool. Change passwords, strengthen your security, monitor for misuse, and stay informed. Acting early means reclaiming control.
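
One practical way to act on step 2 is to check whether a password already appears in known breach dumps. Have I Been Pwned's Pwned Passwords service uses k-anonymity: only the first five characters of the password's SHA-1 hash are sent to the API, never the password itself. A small sketch of the two local pieces (the network call to `https://api.pwnedpasswords.com/range/<prefix>` is left out here):

```python
import hashlib

def kanon_split(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    Pwned Passwords API and the suffix that stays on your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, api_response: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines the range endpoint returns and
    report how many times this password was seen in breaches."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

If `breach_count` comes back nonzero for a password you still use, change it everywhere immediately.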

Sources: https://www.cnbc.com/2024/07/30/cybersecurity-expert-the-worst-thing-to-do-after-a-data-breach.html

Sunday, July 20, 2025

AI Cybersecurity Flaws

 A BBC investigation highlights how vulnerabilities in AI systems, like Microsoft's, Google's, and OpenAI's models, are now being aggressively targeted by hackers. These flaws exist not only in the AI software itself but also in the data pipelines and APIs that support them. In essence, AI systems have become a new frontier for cybercriminals, and traditional security measures aren't always equipped to stop them.

Security researchers found that malicious actors can:

1. Manipulate AI prompts or inputs to bypass protections, tricking models into disclosing sensitive information.

2. Exploit unsecured APIs and data pipelines, tapping into the flow of data that feeds or results from the model.

3. Use adversarial or poisoned data, injecting malicious input during training or inference to degrade performance or embed backdoors.

These vulnerabilities don't just affect standalone AI tools; they can cripple production systems that rely on generative AI, from content bots to developer assistants.

To protect against these emerging threats, teams should: 

Conduct thorough security assessments of all AI components, including training data, model endpoints, and integration points.

Monitor and sanitize prompt inputs to prevent prompt injection and data exfiltration.

Harden access controls and secure APIs with strong authentication and encryption.

Be ready to patch quickly, as AI platforms evolve rapidly and new flaws can emerge unexpectedly.

Implement AI-specific logging & detection tools to identify suspicious behavior, such as attempts to extract hidden model content or unusual usage patterns.
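
As a toy illustration of the "monitor and sanitize prompt inputs" advice, a first line of defense can flag prompts that match known injection phrasings. The patterns below are purely illustrative; real prompt-injection defenses need far more than a regex list, but flagging obvious attempts for logging and review is a cheap start:

```python
import re

# Illustrative patterns only; attackers rephrase constantly, so treat
# this as a logging/triage aid, not a complete defense.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard .* rules",
]

def flag_prompt(user_input: str) -> list[str]:
    """Return the injection patterns a prompt matches, for review."""
    text = user_input.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]
```

Flagged prompts can be blocked, rate-limited, or simply logged so the security team sees extraction attempts early.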

AI is redefining what's possible in technology, but it's also opening new attack surfaces that weren't even on the cybersecurity map a few years ago. Organizations looking to leverage AI must now think beyond traditional IT security, embracing a security-first mindset that treats AI like any other threat-facing platform.

Source: https://www.bbc.com/news/articles/cgl7e33n1d0o

- Joshua Xiong

Sunday, July 13, 2025

Social Engineering: How Hackers Trick People, Not Just Systems

 When we think of hackers, we often imagine them breaking through firewalls or exploiting software vulnerabilities. But many of today's most successful cyberattacks don't target computers; they target people. This tactic is called social engineering, and it remains one of the most dangerous and effective tools in a hacker's toolkit. Social engineering uses manipulation and deception to trick people into giving away access, data, or money. It works so often because even the strongest security system can fail if someone inside opens the door.

Social engineers try to manipulate people into performing actions or revealing confidential information, taking advantage of emotions like trust, fear, or urgency to bypass security controls. An attack often starts with information gathering, like learning employees' names, roles, or habits, before crafting a believable story or message. These tactics are delivered through email, phone calls, text messages, social media, or in-person interactions.

Common social engineering attacks include phishing, vishing, smishing, pretexting, and tailgating. Here are a few ways to defend against them. Slow down when interacting with others to catch strange patterns that might give an attacker away. In email or text interactions, look for bad grammar or wording that seems off. Educating your team on social engineering is the biggest help, since employees are the ones these attackers target. Never sharing personal or login info with others, no matter the urgency, is another good way to prevent it.
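
Those "look for what seems off" checks can even be automated in a rough way. This is a toy heuristic, not a real filter (production systems weigh headers, URLs, and sender reputation), but it captures two classic tells: a sender domain that doesn't match the organization it claims to be, and pressure language:

```python
def phishing_red_flags(sender_domain: str, claimed_org: str, body: str) -> list[str]:
    """Toy heuristic: collect simple phishing red flags from an email.
    Illustrative only; real filters use many more signals."""
    flags = []
    if claimed_org.lower() not in sender_domain.lower():
        flags.append("sender domain does not match claimed organization")
    urgency_words = ("urgent", "immediately", "account suspended", "verify now")
    if any(word in body.lower() for word in urgency_words):
        flags.append("pressure/urgency language")
    return flags
```

An email "from PayPal" sent via `secure-paypa1.com` demanding you "verify now" trips both flags, which is exactly the slow-down moment the advice above is about.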

Technology alone isn't enough to protect us. Hackers know that people are often the weakest link, which is why social engineering remains such a common tactic. The best defense is awareness and vigilance; when we learn to recognize the signs, we make ourselves much harder to fool. By combining smart habits with strong technical safeguards, we can shut the door on social engineers before they even get a foot in.

Sources:
https://www.ibm.com/think/topics/social-engineering

- Joshua Xiong

Sunday, July 6, 2025

The Cybersecurity Risks of AI Tools Like ChatGPT and GitHub Copilot

 AI tools like ChatGPT, Copilot, and other generative models have revolutionized how we work, code, write, and interact with technology. They are fast, helpful, and increasingly embedded in business workflows. But with this new wave of productivity comes a new wave of cybersecurity concerns, and most organizations aren't prepared for them yet.

While AI can be used to strengthen defenses, it also introduces serious data privacy, intellectual property, and attack surface risks that both individuals and businesses must understand.

When users enter sensitive information into AI tools, like company secrets, source code, or customer records, it may be stored or processed in ways that violate data protection policies. Even if a tool says it doesn't store prompts, third party plugins or insecure APIs could be exploited. If your company doesn't explicitly control how AI tools are used, data could leak outside your network without any hacking at all.

Tools like Copilot are excellent at speeding up development, but they are not perfect. Studies have shown that AI-generated code may contain security vulnerabilities like hardcoded credentials, SQL injection flaws, buffer overflows, and missing input validation. Without a secure code review process, these vulnerabilities can make it into production undetected, creating technical debt and attack vectors.
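
The SQL injection case is easy to show concretely. Below, `find_user_unsafe` builds the query by string formatting, a pattern AI assistants sometimes emit, while `find_user_safe` uses a parameterized query so the database driver treats the input as data, not SQL (the table and data are just a demo fixture):

```python
import sqlite3

# Demo fixture: an in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value safely.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Passing the classic payload `' OR '1'='1` to the unsafe version returns every row in the table; the safe version correctly returns nothing, because no user has that literal name. This is exactly the kind of flaw a secure code review should catch before merge.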

Defenders are using AI to protect and fix systems, but attackers are advancing with it too. They use it for phishing emails with better grammar, fake branding, and convincing tones. They can impersonate individuals through voice cloning or deepfakes, aiding social engineering attacks. And AI-written malware and scripts are growing more sophisticated, making detection harder.

To safely benefit from AI tools while reducing risk: create an AI use policy defining what data is and isn't allowed in AI prompts, train employees on the privacy and IP concerns, review AI-generated code, and stay up to date on AI-driven cyberattacks and security trends.

AI tools offer incredible benefits, but they are not risk free. As their use grows, cybersecurity teams must treat AI as both a powerful asset and a potential liability. With the right policies, training, and security controls in place, we can embrace AI innovation without compromising trust or safety.

- Joshua Xiong

Sources:
https://arxiv.org/abs/2108.09293

Saturday, June 28, 2025

Why Regular Security Audits Are Crucial for Business

 In today's constantly evolving cyber threat landscape, it's no longer enough for organizations to install firewalls and antivirus software and call it a day. Cyberattacks are growing more complex, regulations are tightening, and even one security gap can lead to major data breaches or legal consequences. That's why regular security audits have become an essential part of modern cybersecurity strategy.

A security audit is a comprehensive review and evaluation of an organization's IT infrastructure, policies, and procedures to identify vulnerabilities, misconfigurations, and non-compliance issues. It can include reviewing firewalls, access controls, data protection protocols, employee practices, and much more.

Why Security Audits Matter:

  1. Uncover Hidden Vulnerabilities:
    Cybercriminals are always probing for weaknesses. A thorough audit can catch vulnerabilities—like outdated software, exposed ports, or overly permissive access rights—before attackers do.

  2. Ensure Regulatory Compliance:
    Many industries must follow strict laws regarding data protection. Regular audits help demonstrate compliance with frameworks like NIST, ISO 27001, or HIPAA, helping you avoid fines and legal issues.

  3. Prevent Data Breaches:
    According to IBM’s 2023 Data Breach Report, the average cost of a breach is over $4.45 million. Audits can identify areas of risk and provide the guidance needed to improve defenses and protect sensitive customer data.

  4. Boost Customer and Stakeholder Trust:
    When clients know your company takes security seriously—through certifications or audit reports—it builds trust and credibility.

  5. Improve Internal Processes:
    Security audits don’t just reveal flaws—they can help streamline processes, improve employee training, and promote a proactive security culture across departments.

Source: https://auditboard.com/blog/what-is-security-audit
