Sunday, July 27, 2025

What you should do after a data breach!

 According to cybersecurity experts featured by CNBC, the biggest error individuals or organizations make following a data breach is pretending it didn't happen. Actively avoiding the issue or delaying responses only amplifies risk and leaves affected data exposed to further misuse.

Here are some things you should do instead:
1) Understand exactly what was exposed, how it happened, and who might be affected. Acknowledge the breach immediately; denial only delays recovery.

2) Immediately change passwords on breached accounts and on any others using the same or similar credentials. Use strong, unique passwords or a password manager for better protection (see the first sketch after this list).

3) Add a second layer of security wherever possible: SMS codes, authenticator apps, or hardware keys. These stop unauthorized access even if a password is compromised (the second sketch after this list shows how authenticator codes are computed).

4) Watch for unusual charges or credit inquiries. If financial data was exposed, consider placing a fraud alert or freezing your credit with major bureaus to prevent new accounts from being opened in your name.

5) Phishers and scammers often exploit breach news. Be skeptical of unsolicited emails, texts, or calls claiming to help with the breach.

6) Depending on your location or industry, there may be legal processes for disclosing a breach, especially for businesses. Consult counsel or breach notification guidelines as needed.
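As a concrete illustration of step 2, here is a minimal sketch of generating strong, unique passwords with Python's standard library; the length and character set are illustrative choices, not recommendations from the article.

import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password for each breached account.
print(generate_password())

Using secrets rather than random matters here: secrets draws from the operating system's cryptographically secure randomness source.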
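And for step 3, here is a rough sketch of how authenticator apps derive their one-time codes, based on the TOTP algorithm (RFC 6238); the secret below is a dummy value, and real secrets are issued by the service you enroll with.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code (RFC 6238) for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Dummy base32 secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))

Because the code depends on a shared secret plus the current time, a stolen password alone isn't enough to log in.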

Simply hoping a breach goes away is the worst response. Facing reality swiftly and strategically is your most powerful tool. Change passwords, strengthen your security, monitor for misuse, and stay informed. Acting early means reclaiming control.

Sources: https://www.cnbc.com/2024/07/30/cybersecurity-expert-the-worst-thing-to-do-after-a-data-breach.html

Sunday, July 20, 2025

AI Cybersecurity Flaws

 A BBC investigation highlights how vulnerabilities in AI systems, like Microsoft's, Google's, and OpenAI's models, are now being aggressively targeted by hackers. These flaws exist not only in the AI software itself but also in the data pipelines and APIs that support them. In essence, AI systems have become a new frontier for cybercriminals, and traditional security measures aren't always equipped to stop them.

Security researchers found that malicious actors can:

1. Manipulate AI prompts or inputs to bypass protections, tricking models into disclosing sensitive information.

2. Exploit unsecured APIs and data pipelines, tapping into the data that flows into or out of the model.

3. Use adversarial or poisoned data, injecting malicious inputs during training or inference to degrade performance or embed backdoors.

These vulnerabilities don't just affect standalone AI tools; they can cripple production systems that rely on generative AI, from content bots to developer assistants.

To protect against these emerging threats, teams should: 

Conduct thorough security assessments of all AI components, including training data, model endpoints, and integration points.

Monitor and sanitize prompt inputs to prevent prompt injection and data exfiltration (see the first sketch after this list).

Harden access controls and secure APIs with strong authentication and encryption (the second sketch after this list shows one way to authenticate requests).

Be ready to patch quickly, as AI platforms evolve rapidly and new flaws can emerge unexpectedly.

Implement AI-specific logging & detection tools to identify suspicious behavior, such as attempts to extract hidden model content or unusual usage patterns.
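To make the prompt-monitoring point concrete, here is a minimal, heuristic sketch of screening user input before it reaches a model; the patterns are illustrative assumptions, and production systems typically pair heuristics like these with trained classifiers and output filtering.

import re

# Illustrative prompt-injection phrasings; not an exhaustive list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system|hidden) prompt",
    r"you are now (in )?developer mode",
]

def screen_prompt(user_input: str) -> str:
    """Reject input that matches known prompt-injection phrasings."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_input, re.IGNORECASE):
            raise ValueError("potential prompt injection detected")
    return user_input

print(screen_prompt("Summarize this quarterly report."))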
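For the API-hardening point, one common approach is to require each request to carry an HMAC signature that the server verifies before processing. This sketch uses only Python's standard library, and the shared key is a hypothetical placeholder.

import hashlib
import hmac

SHARED_KEY = b"replace-with-a-real-secret"  # placeholder for illustration

def sign(body: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a request body."""
    return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Constant-time check that a request's signature is valid."""
    return hmac.compare_digest(sign(body), signature)

body = b'{"prompt": "hello"}'
assert verify(body, sign(body))

Requests without a valid signature never reach the model endpoint, which closes off the unsecured-API path described above.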

AI is redefining what's possible in technology, but it's also opening new attack surfaces that weren't even on the cybersecurity map a few years ago. Organizations looking to leverage AI must now think beyond traditional IT security, embracing a security-first mindset that treats AI like any other threat-facing platform.

Source: https://www.bbc.com/news/articles/cgl7e33n1d0o

- Joshua Xiong

Sunday, July 13, 2025

Social Engineering: How Hackers Trick People, Not Just Systems

 When we think of hackers, we often imagine them breaking through firewalls or exploiting software vulnerabilities. But many of today's most successful cyberattacks don't target computers; they target people. This tactic is called social engineering, and it remains one of the most dangerous and effective tools in a hacker's toolkit. Social engineering uses manipulation and deception to trick people into giving away access, data, or money. It works so often because even the strongest security system can fail if someone inside opens the door.

Social engineers try to manipulate people into performing actions or revealing confidential information, taking advantage of emotions like trust, fear, or urgency to bypass security controls. An attack often starts with information gathering, like learning employees' names, roles, or habits, before crafting a believable story or message. These tactics are delivered through email, phone calls, text messages, social media, or in-person interactions.

Common social engineering attacks include phishing, vishing, smishing, pretexting, and tailgating. Here are a few ways to defend against them. Slow down when interacting with others to catch strange patterns that might give an attacker away. In email or text conversations, look for bad grammar or wording that seems off (a rough heuristic for this is sketched below). Educating your team on social engineering is the biggest help, since they are the ones these hackers try to attack. And never sharing personal or login information, no matter the urgency, is a good way to prevent it.
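As a toy illustration of spotting wording that "seems off," here is a small heuristic scanner for common pressure phrases; the phrase list is an assumption for demonstration, and real email filters are far more sophisticated.

import re

# Illustrative red-flag phrases often seen in social engineering lures.
RED_FLAGS = [
    r"urgent(ly)?",
    r"verify your (account|password)",
    r"act (now|immediately)",
    r"wire transfer",
]

def count_red_flags(message: str) -> int:
    """Count suspicious phrases commonly used to create false urgency."""
    return sum(bool(re.search(p, message, re.IGNORECASE)) for p in RED_FLAGS)

msg = "URGENT: verify your account now or it will be suspended."
print(count_red_flags(msg))  # 2 hits -> slow down and verify out of band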

Technology alone isn't enough to protect us. Hackers know that people are often the weakest link, which is why social engineering remains such a common tactic. The best defense is awareness and vigilance: when we learn to recognize the signs, we make ourselves much harder to fool. By combining smart habits with strong technical safeguards, we can shut the door on social engineers before they even get a foot in.

Sources:
https://www.ibm.com/think/topics/social-engineering

- Joshua Xiong

Sunday, July 6, 2025

The Cybersecurity Risks of AI Tools like ChatGPT and GitHub Copilot

 AI tools like ChatGPT, Copilot, and other generative models have revolutionized how we work, code, write, and interact with technology. They are fast, helpful, and increasingly embedded in business workflows. But with this new wave of productivity comes a new wave of cybersecurity concerns, and most organizations aren't prepared for them yet.

While AI can be used to strengthen defenses, it also introduces serious data privacy, intellectual property, and attack surface risks that both individuals and businesses must understand.

When users enter sensitive information into AI tools, like company secrets, source code, or customer records, it may be stored or processed in ways that violate data protection policies. Even if a tool says it doesn't store prompts, third-party plugins or insecure APIs could be exploited. If your company doesn't explicitly control how AI tools are used, data could leak outside your network without any hacking at all.
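One mitigation is a simple redaction pass that strips likely secrets from prompts before they leave the network. Below is a minimal sketch; the patterns (an email address, an API-key-like token, a US Social Security number) are illustrative assumptions, and a real data-loss-prevention filter would cover many more formats.

import re

# Illustrative patterns only; extend for your own secret formats.
REDACTIONS = {
    r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b": "[EMAIL]",
    r"\b(?:AKIA|sk-)[A-Za-z0-9]{16,}\b": "[API_KEY]",
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",
}

def redact(prompt: str) -> str:
    """Replace likely secrets before a prompt is sent to a third-party AI API."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

print(redact("Contact jane@example.com, key sk-abcdefghijklmnop1234"))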

Tools like Copilot are excellent at speeding up development, but they are not perfect. Studies have shown that AI-generated code may contain security vulnerabilities, like hardcoded credentials, SQL injection flaws, buffer overflows, and missing input validation. Without a secure code review process, these vulnerabilities can make it into production code undetected, creating technical debt and attack vectors.
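The SQL injection case is easy to demonstrate. In this sketch (a contrived example, not taken from any real AI suggestion), the first function splices user input directly into a query, the classic flaw, while the second uses a parameterized query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input becomes part of the SQL itself.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: a parameterized query keeps data separate from the SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row in the table
print(find_user_safe("' OR '1'='1"))    # returns nothing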

Even as defenders use AI to protect and fix, attackers are taking advantage of the same advancements. They can use it to write phishing emails with better grammar, fake branding, and convincing tones. They can impersonate individuals through voice cloning or deepfakes, aiding social engineering attacks. And AI-written malware and scripts are growing more sophisticated, making detection harder.

To safely benefit from AI tools while reducing risk, you should: create an AI use policy defining what data is and isn't allowed in AI prompts, train employees on the privacy and IP concerns, review AI-generated code before it ships, and stay up to date on AI cyberattacks and security trends.

AI tools offer incredible benefits, but they are not risk-free. As their use grows, cybersecurity teams must treat AI as both a powerful asset and a potential liability. With the right policies, training, and security controls in place, we can embrace AI innovation without compromising trust or safety.

- Joshua Xiong

Sources:
https://arxiv.org/abs/2108.09293

When Doing Laundry Becomes a Cybersecurity Lesson

 In a twist that reads like both a tech startup pitch and a cybersecurity thriller, two UC Santa Cruz students, Alexander Sherbrooke and Iakov T...