Does LinkedIn have a systemic AI bot problem?
I recently had to disable my LinkedIn notifications because of a constant stream of scammers offering to "maximize my resume potential."
This screenshot is just a glance at my inbox this morning, and it is a textbook example of catfishing applied to the professional world. Early in my career, I saw fake LinkedIn profiles built from stock photos of professionally dressed twenty-something women. Now, with cheap or free AI tools, it is faster and easier than ever to create convincing fake profiles and reel in the next fraud victim.
These scammers are building AI bots that generate sophisticated fake profiles to harvest your personal information.
I am hearing from more and more friends and colleagues about an increase in fake job postings on Indeed and LinkedIn preying on people who were recently laid off or are struggling to re-enter the market. Many are mid-career professionals trying to figure out what to do next in this brave new world of AI seemingly everywhere you look.
The Human Cost
A contact of mine recently received a "hiring check" via FedEx for a personal assistant role they found on Indeed. There was no interview process. When they declined the "offer" on my advice, the email reply was a barrage of hateful expletives. That isn't a frustrated hiring manager. It's a fraudster losing a lead: the check for thousands of dollars was bait to capture my friend's bank account information and pair it with the personal details already handed over on their resume.
The Platform Challenge
As the job market tightens and companies prioritize bottom-line efficiency, the desperation of job seekers becomes a commodity for scammers.
The sources in this Perplexity search corroborate the macroeconomic trend lines:
https://www.perplexity.ai/search/is-there-a-significant-increas-kvtU.HLiSg.JQCyo2MJAVA
How quickly can the LinkedIn team recognize and mitigate these obvious, repeated patterns? If AI is being used to automate these attacks, LinkedIn must be equally aggressive in using AI to detect repeat inbox-attack patterns and automate protections for its users.
It's up to people like me and colleagues in my field to counter this malicious use of AI with better AI: tools that identify obvious patterns faster and automate protections before these scammers reach the next victim.
I hope the owners of platforms like LinkedIn, Facebook, and others under attack from malicious uses of AI are applying AI tools in ethical ways that quickly identify these threats and protect all users, not just Premium ones.
Otherwise, these platforms should expect their engagement metrics to drop and should expect to lose quantifiably valuable users like me, a digital product designer. They risk the very revenue they earn from users' time and attention.

