Why Your Next Remote Hire Might Not Be Who You Think
It sounds like something out of a spy movie — but it’s already happening in real life.
What started as a cybersecurity problem mainly affecting companies in the US and Europe has now reached closer to home. Australia has confirmed cases. Microsoft has issued global warnings. And across Asia-Pacific, businesses hiring remote tech talent are increasingly in the spotlight.
If your company hires remote IT staff, developers, or contractors, this is worth paying attention to.
This Isn’t Just a “Somewhere Else” Problem
When people hear about fake job applicants linked to North Korean cyber operations, the assumption is usually the same: that happens overseas.
But the reality has shifted.
Malaysia’s fast-growing tech sector — combined with a strong remote workforce and close ties to multinational industries like oil and gas, aerospace, defence, finance, and technology — makes it an attractive target. Remote hiring has opened huge opportunities for businesses, but it has also opened a new door for people who know how to exploit it.
And the methods being used are surprisingly simple.
What’s Actually Going On?
According to Microsoft’s threat intelligence team, fake remote hiring has evolved into a coordinated, state-backed strategy.
Operatives apply for legitimate remote IT and software roles using entirely fabricated identities. Once hired, salaries are quietly redirected back to support North Korea’s regime. In some reported cases, when companies tried to terminate employment, the individuals threatened to release sensitive company data.
Microsoft has already shut down thousands of accounts connected to these operations, tracked under groups known as Jasper Sleet and Coral Sleet. And the concern isn’t limited to one country — once a tactic works, others quickly copy it. Organised crime groups and other state-linked actors are now watching and learning from the same playbook.
Why AI Makes This Fraud So Much Easier
The worrying part is that these operations don’t rely on secret technology. They use tools that are widely available to everyone.
Fake identities can be built in minutes. AI can generate realistic names, convincing backstories, professional email formats, and even polished profile photos of people who don’t exist.
Interviews can be manipulated. Voice-changing software helps mask accents, while deepfake tools can subtly alter video appearances during remote interviews.
AI helps them stay employed. Emails, reports, and even software code can be generated or assisted by AI — often good enough to avoid suspicion while salaries continue flowing.
Applications are highly targeted. These applicants study job postings carefully and tailor applications perfectly. By the time their CV reaches HR, they already know exactly what the company wants to hear.
This isn’t random, spray-and-pray applying — it’s deliberate and well researched.
Why Malaysian Companies Should Care
Malaysia has become a regional hub for remote and hybrid tech talent, which is fantastic for business — but also appealing to bad actors.
Some factors that increase exposure include:
- A growing remote IT and freelance workforce
- Cross-border hiring through platforms like LinkedIn, JobStreet, and Upwork
- Strong English proficiency that makes fake profiles easier to localise
- Close integration with multinational supply chains and sensitive industries
In short, the same strengths that attract global investment can also attract sophisticated fraud attempts.
Traditional Background Checks Still Matter — But They Weren’t Built for This
Pre-employment screening is still valuable. Reference checks, qualification verification, and financial or criminal record checks remain important safeguards.
The challenge is that these processes were designed for a time when creating a fake identity required significant effort.
Today, a determined impostor can quickly produce:
- A convincing CV and employment history
- Professional-looking qualifications
- A realistic LinkedIn profile built over months
- AI-generated headshots and supporting documents
Individually, these can look legitimate. Standard checks alone may not always catch the difference — especially for fully remote roles.
The solution isn’t replacing existing screening. It’s adding extra layers specifically for remote hiring.
Practical Steps Employers Can Take
The good news is you don’t need to redesign your hiring process from scratch. A few adjustments can make a big difference.
Make live video interviews mandatory. Real-time conversations with spontaneous questions are much harder to fake than recorded submissions.
Train hiring managers to notice anomalies. Deepfakes often show subtle visual issues — strange lighting, blurred edges around the face, or slight delays between speech and lip movement.
Verify identity through multiple channels. Don’t rely on a single document. Combine live identity checks with independent verification where possible.
Pay attention to avoidance behaviour. Repeated excuses to avoid video calls or verification steps should raise questions.
Limit system access early on. Gradual access during probation protects systems if something turns out to be wrong.
Monitor behaviour after onboarding. Unusual working patterns or consistent avoidance of collaboration tools may be early warning signs.
Bring HR and IT security together. This risk sits between people management and cybersecurity — both teams need visibility.
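As a thought experiment, the steps above could be operationalised as a shared red-flag tally that HR and IT security both maintain for remote hires. The sketch below is purely illustrative — the flag names, weights, and escalation threshold are assumptions for demonstration, not an established standard or any vendor's methodology:

```python
# Illustrative only: a minimal red-flag tally for remote-hire screening.
# Flag names and weights are hypothetical assumptions, not a standard.

RED_FLAGS = {
    "declined_live_video": 3,          # repeated excuses to avoid video calls
    "identity_docs_single_source": 2,  # identity rests on one document only
    "av_sync_anomalies": 3,            # lighting / edge / lip-sync issues on video
    "avoids_collaboration_tools": 2,   # post-onboarding avoidance patterns
    "unusual_working_hours": 1,        # consistently off-timezone activity
}

def risk_score(observed_flags):
    """Sum the weights of observed red flags; unrecognised flags score zero."""
    return sum(RED_FLAGS.get(flag, 0) for flag in observed_flags)

def triage(observed_flags, escalate_at=4):
    """Return a coarse recommendation for a joint HR + security review."""
    if risk_score(observed_flags) >= escalate_at:
        return "escalate"   # pause further system access pending review
    return "monitor"

print(triage(["declined_live_video", "identity_docs_single_source"]))
# With the weights above this scores 5, crossing the threshold of 4.
```

The point is not the arithmetic but the design: keeping observations from interviews, onboarding, and post-hire monitoring in one place gives both teams the visibility the advice above calls for, instead of each seeing only a fragment of the picture.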
The Bottom Line
Malaysia’s hiring and screening practices are generally strong and professional. For most traditional roles, existing processes work well.
But remote hiring has changed the risk landscape.
These operations are patient, organised, and improving quickly. And they aren’t only targeting large corporations — any company granting remote access to systems or data can become a target.
The Australian cases were a regional wake-up call.
The real question isn’t whether this could happen to your organisation.
It’s whether your current hiring process would recognise it if it did.
