The Role of Artificial Intelligence in the Evolution of Social Engineering

Health-ISAC/Trinity Health Joint Research
Jan 11, 2024, 11:26 AM
Key Judgements
- Publicly accessible AI is accelerating the evolution of social engineering.
- AI will help create false credibility for malicious actors.
- The risks of successful malign influence operations are increasing with the further development of AI.
- Both large-scale and small-scale social engineering efforts benefit from AI integration.
Executive Summary
The risk posed by artificial intelligence is a nebulous topic when viewed from a macroscopic lens, but when examined in a specific context such as healthcare, notable insights emerge. In a healthcare context, the criminal application of AI is especially concerning due to the potential incorporation of protected health information (PHI) into illicit endeavors and the unique volatility of public trust in healthcare. With healthcare squarely in the political spotlight, two types of generative AI are being used to push divisive narratives: image generators and text generators, which, when used in combination, create compelling disinformation.