{"id":18450,"date":"2025-08-07T14:52:01","date_gmt":"2025-08-07T14:52:01","guid":{"rendered":"https:\/\/goteech.io\/?p=18450"},"modified":"2025-11-11T09:04:30","modified_gmt":"2025-11-11T09:04:30","slug":"what-is-rag-retrieval-augmented-generation","status":"publish","type":"post","link":"https:\/\/goteech.io\/zh-hk\/blog\/learn\/what-is-rag-retrieval-augmented-generation\/","title":{"rendered":"A Practical Guide to Retrieval-Augmented Generation"},"content":{"rendered":"<div data-elementor-type=\"wp-post\" data-elementor-id=\"18450\" class=\"elementor elementor-18450\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-cbfad5b elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"cbfad5b\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-85a16e6\" data-id=\"85a16e6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-eb7eb02 elementor-toc--minimized-on-desktop elementor-widget elementor-widget-table-of-contents\" data-id=\"eb7eb02\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;exclude_headings_by_selector&quot;:&quot;post-recommend, post-recommend-grid&quot;,&quot;marker_view&quot;:&quot;bullets&quot;,&quot;icon&quot;:{&quot;value&quot;:&quot;far fa-circle&quot;,&quot;library&quot;:&quot;fa-regular&quot;},&quot;no_headings_message&quot;:&quot;No headings were found on this 
page.&quot;,&quot;_animation&quot;:&quot;none&quot;,&quot;minimized_on&quot;:&quot;desktop&quot;,&quot;headings_by_tags&quot;:[&quot;h4&quot;],&quot;minimize_box&quot;:&quot;yes&quot;,&quot;hierarchical_view&quot;:&quot;yes&quot;,&quot;min_height&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]},&quot;min_height_tablet&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]},&quot;min_height_mobile&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]}}\" data-widget_type=\"table-of-contents.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-toc__header\">\n\t\t\t\t\t\t<h4 class=\"elementor-toc__header-title\">\n\t\t\t\tTable of Contents\t\t\t<\/h4>\n\t\t\t\t\t\t\t\t\t\t<div class=\"elementor-toc__toggle-button elementor-toc__toggle-button--expand\" role=\"button\" tabindex=\"0\" aria-controls=\"elementor-toc__eb7eb02\" aria-expanded=\"true\" aria-label=\"Open table of contents\"><i aria-hidden=\"true\" class=\"fas fa-chevron-down\"><\/i><\/div>\n\t\t\t\t<div class=\"elementor-toc__toggle-button elementor-toc__toggle-button--collapse\" role=\"button\" tabindex=\"0\" aria-controls=\"elementor-toc__eb7eb02\" aria-expanded=\"true\" aria-label=\"Close table of contents\"><i aria-hidden=\"true\" class=\"fas fa-chevron-up\"><\/i><\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<div id=\"elementor-toc__eb7eb02\" class=\"elementor-toc__body\">\n\t\t\t<div class=\"elementor-toc__spinner-container\">\n\t\t\t\t<i class=\"elementor-toc__spinner eicon-animation-spin eicon-loading\" aria-hidden=\"true\"><\/i>\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-adb21d2 elementor-widget elementor-widget-spacer\" data-id=\"adb21d2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\n\t\t<div class=\"elementor-element elementor-element-89c4ab8 elementor-widget elementor-widget-wp-widget-text\" data-id=\"89c4ab8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><p>Retrieval-Augmented Generation (RAG) is a pattern that combines a language model\u2019s generative ability with a retrieval system that fetches relevant documents (from a knowledge base or vector store) and conditions the model\u2019s output on that external context. In short: instead of trusting the model\u2019s stored &#8220;memory&#8221; alone, RAG gives it live access to up-to-date or domain-specific documents so outputs are more factual, auditable, and easier to update.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c410a2a elementor-widget elementor-widget-wp-widget-text\" data-id=\"c410a2a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Why RAG matters today<\/h4>\n<p>Large language models are powerful, but they have limits: their knowledge is frozen at training time, and they can hallucinate (confidently produce wrong facts). RAG addresses these problems by marrying parametric memory (the model\u2019s weights) with non-parametric memory (a searchable document index). That design yields better factuality on knowledge-intensive tasks and makes it easier to provide provenance for an answer. 
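<\/p>
<p>To make the retriever + generator split concrete, here is a minimal, self-contained sketch of the retrieval half. The bag-of-words scoring and every name in it are illustrative stand-ins: a real pipeline would use a trained embedding model and a vector database rather than word counts.<\/p>

```python
# Toy retrieve-then-prompt sketch. The 'embedding' is a word-count
# vector and the 'generator' is just a prompt template; both are
# illustrative placeholders for real components.
from collections import Counter
from math import sqrt

def embed(text):
    # Stand-in embedding: lower-cased word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank passages by similarity to the query and keep the top-k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    # Condition the generator on the retrieved evidence only.
    context = '\n'.join('- ' + p for p in passages)
    return 'Answer using only this context:\n' + context + '\nQuestion: ' + query

corpus = [
    'RAG retrieves documents and conditions the model on them.',
    'Vector databases store embeddings for similarity search.',
    'Bananas are rich in potassium.',
]
prompt = build_prompt('What does RAG retrieve?',
                      retrieve('What does RAG retrieve?', corpus))
```

<p>The prompt would then go to the LLM; swapping in better embeddings or a different model leaves the rest of the pipeline unchanged.<\/p>
<p>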
The original RAG paper demonstrated clear gains on several open-domain QA benchmarks.<\/p>\n<p>Vendors and cloud providers now offer RAG patterns and managed services because enterprises need both accuracy and auditability\u2014Microsoft, major LLM providers, and many open-source stacks publish guidance and tooling for RAG pipelines.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-571efac elementor-widget elementor-widget-wp-widget-text\" data-id=\"571efac\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">How RAG works: the components<\/h4>\n<p>A production RAG pipeline typically includes:<\/p>\n<p><b>Ingestion + indexing:<\/b> Documents (PDFs, docs, knowledge base pages) are split, embedded (vectorized), and stored in a vector database (FAISS, Milvus, Pinecone, Chroma, etc.).<\/p>\n<p><b>Retrieval:<\/b> Given a user query, the system converts the query to an embedding and retrieves the top-k most relevant passages.<\/p>\n<p><b>Augmentation &amp; conditioning:<\/b> The retrieved passages are concatenated with the user prompt (or inserted into a structured template) and passed to an LLM.<\/p>\n<p><b>Generation:<\/b> The model generates an answer grounded on the retrieved evidence, often with citations or quoted source snippets.<\/p>\n<p><b>Post-processing \/ human review:<\/b> Answers are optionally checked (confidence thresholds, reranking, or human-in-the-loop review) before returning to the user.<\/p>\n<p>This separation (retriever + generator) is the heart of RAG and explains why it\u2019s flexible: you can swap embeddings, retrievers, or LLM backends independently.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-90c5f07 elementor-widget 
elementor-widget-wp-widget-text\" data-id=\"90c5f07\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Variants &amp; architecture choices<\/h4>\n<p>The RAG literature and practice offer a few flavors:<\/p>\n<ul>\n<li><b>Sequence-conditioning RAG (original):<\/b> Retrieved passages are concatenated and fed to the generator; the model conditions on them to produce an answer.<\/li>\n<li><b>Per-token retrieval:<\/b> More advanced RAG versions can retrieve different passages as the model generates tokens (more expensive, sometimes more precise).<\/li>\n<li><b>GraphRAG \/ Knowledge-graph augmented RAG:<\/b> Combine RAG with graph structures to handle entity relations and multi-hop reasoning\u2014useful for complex knowledge graphs or contract analysis. Recent evaluations show graph-based RAG outperforms vanilla RAG on some multi-hop tasks.<\/li>\n<\/ul>\n<p>Each choice affects accuracy, latency, and engineering complexity.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-dbcfe81 elementor-widget elementor-widget-wp-widget-text\" data-id=\"dbcfe81\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">When to use RAG? Rules of thumb<\/h4>\n<p>Use RAG when you need any of the following:<\/p>\n<ul>\n<li>Fresh or proprietary facts (internal docs, policy text, recent news).<\/li>\n<li>Traceability \/ citations (legal, compliance, healthcare contexts).<\/li>\n<li>Domain specificity where out-of-the-box LLM knowledge is insufficient.<\/li>\n<\/ul>\n<p>If your task is simple, stable, and low-risk, a retrieval-free prompt (fine-tuning or direct prompting) might suffice. 
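<\/p>
<p>One pragmatic way to operationalise this rule of thumb at runtime is a simple router: answer directly when retrieval finds nothing relevant, and build a RAG prompt otherwise. The threshold, tuple shape, and names below are assumptions for illustration, not tuned values.<\/p>

```python
# Illustrative score-based router between a plain prompt and a RAG
# prompt. retrieval_hits is a list of (passage, score) pairs, best
# first; the 0.35 threshold is an arbitrary example value.
def route(query, retrieval_hits, threshold=0.35):
    if retrieval_hits and retrieval_hits[0][1] >= threshold:
        passages = [p for p, s in retrieval_hits[:3]]
        return ('rag', passages)
    return ('direct', [])

mode, ctx = route('how do I reset my password?',
                  [('Password reset steps: ...', 0.82)])
```

<p>In production, this branch point is also where you would log retrieval scores for later threshold tuning.<\/p>
<p>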
But for knowledge-intensive customer support, policy lookup, or QA across internal documents, RAG is often the best path.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6ebf59c elementor-widget elementor-widget-wp-widget-text\" data-id=\"6ebf59c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Practical implementation guide<\/h4>\n<p><b>Start small with a pilot.<\/b> Index a focused corpus (e.g., onboarding docs or product FAQs) and run a Q&amp;A prototype. LangChain and similar frameworks provide fast tutorials and templates to build a RAG agent. <\/p>\n<p><b>Pick embeddings &amp; vector DBs.<\/b> Test a few embedding models for retrieval quality; use a vector DB that matches your scale and SLA needs (Pinecone, Milvus, FAISS, Chroma).<\/p>\n<p><b>Tune retrieval size &amp; prompts.<\/b> Experiment with top-k, passage length, and prompt templates that instruct the LLM to cite sources and prefer retrieved evidence.<\/p>\n<p><b>Add confidence &amp; fallback logic.<\/b> If retrieval score or model confidence is low, escalate to human review or a fallback model.<\/p>\n<p><b>Monitor drift &amp; reindexing cadence.<\/b> New documents should be embedded and indexed promptly to keep the knowledge base fresh.<\/p>\n<p><b>Measure beyond accuracy.<\/b> Track resolution rate, hallucination rate, latency, cost per query, and provenance coverage.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-026e65d elementor-widget elementor-widget-wp-widget-text\" data-id=\"026e65d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 
style=\"margin-bottom: 12px\">Common pitfalls &amp; how to avoid them<\/h4>\n<ul>\n<li><b>Bad retrieval = bad answers.<\/b> Improve retrieval quality through better embeddings, cleaning source docs, and semantic chunking.<\/li>\n<li><b>Prompt length &amp; token limits.<\/b> Concatenating long retrieved passages can exceed model context windows\u2014use summarization or selective passage ranking.<\/li>\n<li><b>Privacy &amp; security risks.<\/b> Indexing sensitive documents requires encryption, RBAC, and strict deletion policies. Don\u2019t accidentally expose PII in retrieved passages.<\/li>\n<li><b>Validation leakage.<\/b> Keep evaluation\/test sets separate from indexed documents to avoid overfitting ensemble or reranking strategies.<\/li>\n<\/ul>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0203346 elementor-widget elementor-widget-wp-widget-text\" data-id=\"0203346\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Advanced ideas: fusion, reranking, and hybrid patterns<\/h4>\n<ul>\n<li><b>Rerankers:<\/b> Run a secondary cross-encoder to rerank retrieved passages before conditioning the generator\u2014reranking often improves final accuracy.<\/li>\n<li><b>Cache &amp; cascade:<\/b> Try a cheap model or retrieval-free pass first; escalate to full RAG for low-confidence queries to balance cost and latency.<\/li>\n<li><b>GraphRAG \/ knowledge graphs:<\/b> Use graph structures to handle entity linking and explainable multi-hop reasoning where simple retrieval fails.<\/li>\n<\/ul>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-48fd8ee elementor-widget elementor-widget-wp-widget-text\" data-id=\"48fd8ee\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Evaluation &amp; metrics<\/h4>\n<p>Measure multiple dimensions:<\/p>\n<ul>\n<li>Exact &amp; semantic accuracy (task dependent).<\/li>\n<li>Hallucination rate (proportion of answers making false claims).<\/li>\n<li>Provenance coverage (fraction of answers that include valid source citations).<\/li>\n<li>Latency &amp; cost (ms and dollars per successful resolution).<\/li>\n<\/ul>\n<p>Benchmarks from the original RAG paper and newer evaluations show RAG improves factuality on many tasks\u2014yet you must validate on your data.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4ce106e elementor-widget elementor-widget-wp-widget-text\" data-id=\"4ce106e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\"><strong>Conclusion \u2014 is RAG right for you?<\/strong><\/h4>\n<p>RAG is now a core pattern for building LLM apps that must be accurate, auditable, and easily updated. It adds engineering complexity (indexing, embedding pipelines, vector DB management) but often repays that cost with better factuality and traceability. Start with a narrow pilot, measure hallucinations and latency, then expand with reranking and governance. 
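<\/p>
<p>Measuring hallucinations, provenance, and latency reduces to simple ratios once each answer in an evaluation run has been labeled. A minimal roll-up might look like the following; the field names are assumptions, not a standard schema.<\/p>

```python
# Illustrative metric roll-up for a labeled RAG evaluation run.
# Each result dict uses hand-labeled flags; field names are examples.
def summarize(results):
    n = len(results)
    return {
        'hallucination_rate': sum(r['hallucinated'] for r in results) / n,
        'provenance_coverage': sum(r['has_citation'] for r in results) / n,
        'avg_latency_ms': sum(r['latency_ms'] for r in results) / n,
    }

runs = [
    {'hallucinated': False, 'has_citation': True, 'latency_ms': 420},
    {'hallucinated': True, 'has_citation': False, 'latency_ms': 380},
    {'hallucinated': False, 'has_citation': True, 'latency_ms': 510},
]
stats = summarize(runs)
```

<p>Tracking these numbers per release makes regressions visible before users notice them.<\/p>
<p>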
The combination of an LLM plus a well-tuned retriever is one of the most practical ways to build trustworthy, knowledge-driven AI experiences today.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d5b7f9c elementor-widget elementor-widget-spacer\" data-id=\"d5b7f9c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-150f1a1 e-grid-align-left elementor-shape-rounded elementor-grid-0 elementor-widget elementor-widget-social-icons\" data-id=\"150f1a1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"social-icons.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-social-icons-wrapper elementor-grid\" role=\"list\">\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-facebook-f elementor-repeater-item-bd158f5\" href=\"https:\/\/www.facebook.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Facebook-f<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-facebook-f\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-x-twitter elementor-repeater-item-c81668c\" href=\"http:\/\/x.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">X-twitter<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-x-twitter\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a 
class=\"elementor-icon elementor-social-icon elementor-social-icon-linkedin-in elementor-repeater-item-c1bfed6\" href=\"https:\/\/www.linkedin.com\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Linkedin-in<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-linkedin-in\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-whatsapp elementor-repeater-item-609b641\" href=\"https:\/\/web.whatsapp.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Whatsapp<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-whatsapp\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-fb8fcab elementor-widget elementor-widget-spacer\" data-id=\"fb8fcab\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9482c13 align--mobileleft animated-fast align-left elementor-invisible elementor-widget elementor-widget-mae-link\" data-id=\"9482c13\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeInRight&quot;,&quot;_animation_delay&quot;:200}\" data-widget_type=\"mae-link.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\n        <a class=\"master-link  icon-left\" href=\"https:\/\/goteech.io\/zh-hk\/resources\/\" >\n            <span class=\"icon unic unic-arrow-circle-left\"><\/span>            <span>Back to your resources<\/span>\n                    <\/a>\n\n        
\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-66725a6 elementor-widget elementor-widget-spacer\" data-id=\"66725a6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>Learn what RAG (Retrieval-Augmented Generation) is, how it works, and the common architecture options, then follow the step-by-step implementation guidance to build LLM applications that cite their sources and reduce hallucinations.<\/p>","protected":false},"author":2,"featured_media":18646,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[97],"tags":[],"class_list":["post-18450","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-learn"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18450","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/comments?post=18450"}],"version-history":[{"count":5,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18450\/revisions"}],"predecessor-version":[{"id":18487,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18450\/revisions\/18487"}],"wp:featuredme
dia":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/media\/18646"}],"wp:attachment":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/media?parent=18450"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/categories?post=18450"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/tags?post=18450"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}