{"id":767,"date":"2026-01-22T13:06:57","date_gmt":"2026-01-22T13:06:57","guid":{"rendered":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/"},"modified":"2026-01-22T13:06:57","modified_gmt":"2026-01-22T13:06:57","slug":"why-ai-keeps-falling-for-prompt-injection-attacks","status":"publish","type":"post","link":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/","title":{"rendered":"Why AI Keeps Falling for Prompt Injection Attacks"},"content":{"rendered":"<div>\n<p>Imagine you work at a drive-through restaurant. Someone drives up and says: \u201cI\u2019ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.\u201d Would you hand over the money? Of course not. Yet this is what <a href=\"https:\/\/spectrum.ieee.org\/tag\/large-language-models\">large language models<\/a> (<a href=\"https:\/\/spectrum.ieee.org\/tag\/llms\">LLMs<\/a>) do.<\/p>\n<p><a href=\"https:\/\/www.ibm.com\/think\/topics\/prompt-injection\">Prompt injection<\/a> is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system <a href=\"https:\/\/spectrum.ieee.org\/tag\/passwords\">passwords<\/a> or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM\u2019s <a href=\"https:\/\/medium.com\/data-science\/safeguarding-llms-with-guardrails-4f5d9f57cff2\">safety guardrails<\/a>, and it complies.<\/p>\n<p>LLMs are vulnerable to <a href=\"https:\/\/fdzdev.medium.com\/20-prompt-injection-techniques-every-red-teamer-should-test-b22359bfd57d\">all sorts<\/a> of prompt injection attacks, some of them absurdly obvious. A chatbot won\u2019t tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won\u2019t accept nefarious text inputs, but might if the text is rendered as <a href=\"https:\/\/arxiv.org\/abs\/2402.11753\">ASCII art<\/a> or appears in an image of a <a href=\"https:\/\/www.lakera.ai\/blog\/visual-prompt-injections\">billboard<\/a>. Some ignore their guardrails when told to \u201cignore previous instructions\u201d or to \u201cpretend you have no guardrails.\u201d<\/p>\n<p>AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are <a href=\"https:\/\/llm-attacks.org\/\">impossible<\/a> with today\u2019s LLMs. More precisely, there\u2019s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.<\/p>\n<p>If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.<\/p>\n<h3>Human Judgment Depends on Context<\/h3>\n<p>Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.<\/p>\n<p>As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what\u2019s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us <a href=\"https:\/\/www.nature.com\/articles\/srep08242\">especially careful<\/a> about things that have a large downside or are impossible to reverse.<\/p>\n<p>The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to <a href=\"https:\/\/ncase.me\/trust\/\">reward cooperation with cooperation<\/a> and punish defection with defection.<\/p>\n<p>A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.<\/p>\n<p>We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who\u2019s making the request), and normative (what\u2019s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual\u2014for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.<\/p>\n<p>Crucially, we also have an interruption reflex. If something feels \u201coff,\u201d we naturally pause the <a href=\"https:\/\/spectrum.ieee.org\/tag\/automation\">automation<\/a> and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it\u2019s how we humans are able to navigate a complex world where others are constantly trying to trick us.<\/p>\n<p>So let\u2019s return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you\u2019re filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.<\/p>\n<p>Con artists are astute observers of human defenses. Successful <a href=\"https:\/\/spectrum.ieee.org\/tag\/scams\">scams<\/a> are often slow, undermining a mark\u2019s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era \u201cbig store\u201d cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern <a href=\"https:\/\/dfpi.ca.gov\/news\/insights\/pig-butchering-how-to-spot-and-report-the-scam\/\">\u201cpig-butchering\u201d frauds<\/a>, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim\u2019s trust.<\/p>\n<p>Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s <a href=\"https:\/\/en.wikipedia.org\/wiki\/Strip_search_phone_call_scam\">targeted fast-food workers by phone<\/a>, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.<\/p>\n<h3>Why LLMs Struggle With Context and Judgment<\/h3>\n<p>LLMs behave as if they have a notion of context, but it\u2019s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see \u201ctokens,\u201d not hierarchies and intentions. LLMs don\u2019t reason through context, they only reference it.<\/p>\n<p>While LLMs often get the details right, they can easily miss the <a href=\"https:\/\/spectrum.ieee.org\/tag\/big-picture\">big picture<\/a>. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond \u201cno.\u201d What it doesn\u2019t \u201cknow\u201d\u2014forgive the anthropomorphizing\u2014is whether it\u2019s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.<\/p>\n<p>This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it\u2019s hard to get it back. AI expert Simon Willison <a href=\"https:\/\/simonwillison.net\/2025\/Sep\/12\/claude-memory\/\">wipes context clean<\/a> if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.<\/p>\n<p>There\u2019s more. LLMs are <a href=\"https:\/\/www.cmu.edu\/dietrich\/news\/news-stories\/2025\/july\/trent-cash-ai-overconfidence.html\">overconfident<\/a> because they\u2019ve been designed to give an answer rather than express ignorance. A drive-through worker might say: \u201cI don\u2019t know if I should give you all the money\u2014let me ask my boss,\u201d whereas an LLM will just make the call. And since LLMs are designed to be <a href=\"https:\/\/hai.stanford.edu\/news\/large-language-models-just-want-to-be-liked\">pleasing<\/a>, they\u2019re more likely to satisfy a user\u2019s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what\u2019s necessary for security.<\/p>\n<p>The result is that the current generation of LLMs is far more gullible than people. They\u2019re naive and regularly fall for manipulative <a href=\"https:\/\/arstechnica.com\/science\/2025\/09\/these-psychological-tricks-can-get-llms-to-respond-to-forbidden-prompts\/\">cognitive tricks<\/a> that wouldn\u2019t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There\u2019s a <a href=\"https:\/\/www.bbc.com\/news\/articles\/ckgyk2p55g8o\">story<\/a> about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.<\/p>\n<h3>The Limits of <a href=\"https:\/\/spectrum.ieee.org\/tag\/agentic-ai\">AI Agents<\/a><\/h3>\n<p>Prompt injection is an unsolvable problem that <a href=\"https:\/\/www.computer.org\/csdl\/magazine\/sp\/5555\/01\/11194053\/2aB2Rf5nZ0k\">gets worse<\/a> when we give AIs tools and tell them to act independently. This is the promise of <a href=\"https:\/\/spectrum.ieee.org\/tag\/agentic-ai\">AI agents<\/a>: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions\u2014and sometimes they will take the <a href=\"https:\/\/www.theregister.com\/2025\/10\/28\/ai_browsers_prompt_injection\/\"> wrong ones<\/a>.<\/p>\n<p>Science doesn\u2019t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don\u2019t know if it\u2019s possible to build an LLM, where trusted commands and untrusted inputs are processed through the <a href=\"https:\/\/cacm.acm.org\/opinion\/llms-data-control-path-insecurity\/\">same channel<\/a>, which is immune to prompt injection attacks.<\/p>\n<p>We humans get our model of the world\u2014and our facility with overlapping contexts\u2014from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person\u2019s identity as a doctor is suddenly more relevant.<\/p>\n<p>We don\u2019t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can\u2019t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.<\/p>\n<p>The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them \u201c<a href=\"https:\/\/medium.com\/@AnthonyLaneau\/beyond-llms-charting-the-next-frontiers-of-ai-with-yann-lecun-09e84f1978f9\">world models<\/a>.\u201d Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its na\u00efvet\u00e9.<\/p>\n<p>Ultimately we are probably faced with a <a href=\"https:\/\/www.computer.org\/csdl\/magazine\/sp\/5555\/01\/11194053\/2aB2Rf5nZ0k\">security trilemma<\/a> when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it\u2019s going to be tails\u2014and along with a burger and fries, the customer will get the contents of the cash drawer.<\/p>\n<p><em>This essay was written with Barath Raghavan, and originally appeared in <a href=\"https:\/\/spectrum.ieee.org\/prompt-injection-attack\">IEEE Spectrum<\/a>.<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Imagine you work at a drive-through restaurant. Someone drives up and says: \u201cI\u2019ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.\u201d Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do. Prompt injection is a method [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"site-container-style":"default","site-container-layout":"default","site-sidebar-layout":"default","disable-article-header":"default","disable-site-header":"default","disable-site-footer":"default","disable-content-area-spacing":"default","footnotes":""},"categories":[4,489,490,90,342,210,53],"tags":[91],"class_list":["post-767","post","type-post","status-publish","format-standard","hentry","category-ai","category-chatbots","category-cons","category-cybersecurity","category-fraud","category-llm","category-uncategorized","tag-cybersecurity"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.7 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited\" \/>\n<meta property=\"og:description\" content=\"Imagine you work at a drive-through restaurant. Someone drives up and says: \u201cI\u2019ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.\u201d Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do. Prompt injection is a method [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\" \/>\n<meta property=\"og:site_name\" content=\"Imperative Business Ventures Limited\" \/>\n<meta property=\"article:published_time\" content=\"2026-01-22T13:06:57+00:00\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02\"},\"headline\":\"Why AI Keeps Falling for Prompt Injection Attacks\",\"datePublished\":\"2026-01-22T13:06:57+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\"},\"wordCount\":1697,\"keywords\":[\"Cybersecurity\"],\"articleSection\":[\"AI\",\"chatbots\",\"cons\",\"Cybersecurity\",\"fraud\",\"LLM\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\",\"url\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\",\"name\":\"Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited\",\"isPartOf\":{\"@id\":\"https:\/\/blog.ibvl.in\/#website\"},\"datePublished\":\"2026-01-22T13:06:57+00:00\",\"author\":{\"@id\":\"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02\"},\"breadcrumb\":{\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/blog.ibvl.in\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why AI Keeps Falling for Prompt Injection Attacks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.ibvl.in\/#website\",\"url\":\"https:\/\/blog.ibvl.in\/\",\"name\":\"Imperative Business Ventures Limited\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.ibvl.in\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.ibvl.in\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4d20b2cd313e4417a599678e950e6fb7d4dfa178a72f2b769335a08aaa615aa9?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4d20b2cd313e4417a599678e950e6fb7d4dfa178a72f2b769335a08aaa615aa9?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\/\/blog.ibvl.in\"],\"url\":\"https:\/\/blog.ibvl.in\/index.php\/author\/admin_hcbs9yw6\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/","og_locale":"en_US","og_type":"article","og_title":"Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited","og_description":"Imagine you work at a drive-through restaurant. Someone drives up and says: \u201cI\u2019ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.\u201d Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do. Prompt injection is a method [&hellip;]","og_url":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/","og_site_name":"Imperative Business Ventures Limited","article_published_time":"2026-01-22T13:06:57+00:00","author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#article","isPartOf":{"@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/"},"author":{"name":"admin","@id":"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02"},"headline":"Why AI Keeps Falling for Prompt Injection Attacks","datePublished":"2026-01-22T13:06:57+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/"},"wordCount":1697,"keywords":["Cybersecurity"],"articleSection":["AI","chatbots","cons","Cybersecurity","fraud","LLM"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/","url":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/","name":"Why AI Keeps Falling for Prompt Injection Attacks - Imperative Business Ventures Limited","isPartOf":{"@id":"https:\/\/blog.ibvl.in\/#website"},"datePublished":"2026-01-22T13:06:57+00:00","author":{"@id":"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02"},"breadcrumb":{"@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.ibvl.in\/index.php\/2026\/01\/22\/why-ai-keeps-falling-for-prompt-injection-attacks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.ibvl.in\/"},{"@type":"ListItem","position":2,"name":"Why AI Keeps Falling for Prompt Injection Attacks"}]},{"@type":"WebSite","@id":"https:\/\/blog.ibvl.in\/#website","url":"https:\/\/blog.ibvl.in\/","name":"Imperative Business Ventures Limited","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.ibvl.in\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blog.ibvl.in\/#\/schema\/person\/55b87b72a56b1bbe9295fe5ef7a20b02","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.ibvl.in\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/4d20b2cd313e4417a599678e950e6fb7d4dfa178a72f2b769335a08aaa615aa9?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4d20b2cd313e4417a599678e950e6fb7d4dfa178a72f2b769335a08aaa615aa9?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/blog.ibvl.in"],"url":"https:\/\/blog.ibvl.in\/index.php\/author\/admin_hcbs9yw6\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/posts\/767","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/comments?post=767"}],"version-history":[{"count":0,"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/posts\/767\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/media?parent=767"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/categories?post=767"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.ibvl.in\/index.php\/wp-json\/wp\/v2\/tags?post=767"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}