{"id":1321,"date":"2026-02-16T12:04:52","date_gmt":"2026-02-16T12:04:52","guid":{"rendered":"https:\/\/blog.ibvl.in\/index.php\/2026\/02\/16\/the-promptware-kill-chain\/"},"modified":"2026-02-16T12:04:52","modified_gmt":"2026-02-16T12:04:52","slug":"the-promptware-kill-chain","status":"publish","type":"post","link":"https:\/\/blog.ibvl.in\/index.php\/2026\/02\/16\/the-promptware-kill-chain\/","title":{"rendered":"The Promptware Kill Chain"},"content":{"rendered":"<div>\n<p><a href=\"https:\/\/www.schneier.com\/wp-content\/uploads\/2026\/02\/promptware-kill-chain.jpg\"><img decoding=\"async\" src=\"https:\/\/www.schneier.com\/wp-content\/uploads\/2026\/02\/promptware-kill-chain-660w.jpg\" alt=\"The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command &amp; control, lateral movement, action on objective\"><\/a><\/p>\n<p>Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on \u201c<a href=\"https:\/\/simonwillison.net\/2022\/Sep\/12\/prompt-injection\/\">prompt injection<\/a>,\u201d a set of techniques for embedding malicious instructions into an LLM\u2019s inputs. This term suggests a simple, singular vulnerability, and that framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term \u201cpromptware.\u201d In a <a href=\"https:\/\/arxiv.org\/abs\/2601.09625\">new paper<\/a>, we, the authors, propose a structured seven-step \u201cpromptware kill chain\u201d to give policymakers and security practitioners the vocabulary and framework they need to address the escalating AI threat landscape.<\/p>\n<p>In our model, the promptware kill chain begins with <em>Initial Access<\/em>. 
This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through \u201cindirect prompt injection.\u201d In an indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves at inference time, such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.<\/p>\n<p>The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems, which strictly separate executable code from user data, LLMs process all input\u2014whether it is a system command, a user\u2019s email, or a retrieved document\u2014as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.<\/p>\n<p>But prompt injection is only the <em>Initial Access<\/em> step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.<\/p>\n<p>Once the malicious instructions are inside material incorporated into the AI\u2019s context, the attack transitions to <em>Privilege Escalation<\/em>, often referred to as \u201cjailbreaking.\u201d In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. 
Through techniques ranging from social engineering\u2014convincing the model to adopt a persona that ignores its rules\u2014to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.<\/p>\n<p>Following privilege escalation comes <em>Reconnaissance<\/em>. Here, the attack manipulates the LLM into revealing information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is typically performed before initial access, promptware reconnaissance occurs after the initial access and jailbreaking steps have already succeeded. Its effectiveness relies entirely on the victim model\u2019s ability to reason over its context, inadvertently turning that reasoning to the attacker\u2019s advantage.<\/p>\n<p>Fourth comes the <em>Persistence<\/em> phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user\u2019s email archive so that every time the AI summarizes past emails, the malicious prompt is re-executed.<\/p>\n<p>The <em>Command-and-Control (C2)<\/em> stage relies on established persistence and on the LLM application dynamically fetching commands from the internet at inference time. 
While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat, whose goals and scheme are fixed at injection time, into a controllable trojan whose behavior can be modified by an attacker.<\/p>\n<p>The sixth stage, <em>Lateral Movement<\/em>, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a \u201cself-replicating\u201d attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to cascading failure.<\/p>\n<p>Finally, the kill chain concludes with <em>Actions on Objective<\/em>. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical-world impact. There are examples of AI <a href=\"https:\/\/crypto.news\/aixbt-agent-hacked-losing-55eth-aixbt-token-drops-2025\/\">agents being manipulated<\/a> into selling cars for a single dollar or <a href=\"https:\/\/crypto.news\/aixbt-agent-hacked-losing-55eth-aixbt-token-drops-2025\/\">transferring cryptocurrency<\/a> to an attacker\u2019s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI\u2019s underlying system. The outcome of this stage determines the type of malware executed by the promptware: infostealers, spyware, and cryptostealers, among others.<\/p>\n<p>The kill chain has already been demonstrated. 
For example, in the research \u201c<a href=\"https:\/\/arxiv.org\/abs\/2508.12175\">Invitation Is All You Need<\/a>,\u201d attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user\u2019s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren\u2019t demonstrated in this attack.<\/p>\n<p>Similarly, the \u201c<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3719027.3765196\">Here Comes the AI Worm<\/a>\u201d research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker\u2019s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user\u2019s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren\u2019t demonstrated in this attack.<\/p>\n<p>The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. 
Prompt injection isn\u2019t something we can fix in current LLM technology. Instead, we need a defense-in-depth strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.<\/p>\n<p><em>This essay was written with Oleg Brodt, Elad Feldman, and Ben Nassi, and originally appeared in <a href=\"https:\/\/www.lawfaremedia.org\/article\/the-promptware-kill-chain\">Lawfare<\/a>.<\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on \u201cprompt injection,\u201d a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. 
This term suggests a simple, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"site-container-style":"default","site-container-layout":"default","site-sidebar-layout":"default","disable-article-header":"default","disable-site-header":"default","disable-site-footer":"default","disable-content-area-spacing":"default","footnotes":""},"categories":[4,90,210,99,53],"tags":[91],"class_list":["post-1321","post","type-post","status-publish","format-standard","hentry","category-ai","category-cybersecurity","category-llm","category-malware","category-uncategorized","tag-cybersecurity"]}