{"id":18512,"date":"2025-11-07T18:48:10","date_gmt":"2025-11-07T18:48:10","guid":{"rendered":"https:\/\/goteech.io\/?p=18512"},"modified":"2025-11-11T07:07:19","modified_gmt":"2025-11-11T07:07:19","slug":"ai-hallucinations-trusted-search-results","status":"publish","type":"post","link":"https:\/\/goteech.io\/zh-hk\/blog\/learn\/ai-hallucinations-trusted-search-results\/","title":{"rendered":"AI Hallucinations 101: The Challenge and Solutions"},"content":{"rendered":"<div data-elementor-type=\"wp-post\" data-elementor-id=\"18512\" class=\"elementor elementor-18512\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-cbfad5b elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"cbfad5b\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-85a16e6\" data-id=\"85a16e6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-eb7eb02 elementor-toc--minimized-on-desktop elementor-widget elementor-widget-table-of-contents\" data-id=\"eb7eb02\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;exclude_headings_by_selector&quot;:&quot;post-recommend, post-recommend-grid&quot;,&quot;marker_view&quot;:&quot;bullets&quot;,&quot;icon&quot;:{&quot;value&quot;:&quot;far fa-circle&quot;,&quot;library&quot;:&quot;fa-regular&quot;},&quot;no_headings_message&quot;:&quot;No headings were found on this page.&quot;,&quot;_animation&quot;:&quot;none&quot;,&quot;minimized_on&quot;:&quot;desktop&quot;,&quot;headings_by_tags&quot;:[&quot;h4&quot;],&quot;minimize_box&quot;:&quot;yes&quot;,&quot;hierarchical_view&quot;:&quot;yes&quot;,&quot;min_height&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]},&quot;min_height_tablet&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]},&quot;min_height_mobile&quot;:{&quot;unit&quot;:&quot;px&quot;,&quot;size&quot;:&quot;&quot;,&quot;sizes&quot;:[]}}\" data-widget_type=\"table-of-contents.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<div class=\"elementor-toc__header\">\n\t\t\t\t\t\t<h4 class=\"elementor-toc__header-title\">\n\t\t\t\t\u5167\u5bb9\u76ee\u9304\t\t\t<\/h4>\n\t\t\t\t\t\t\t\t\t\t<div class=\"elementor-toc__toggle-button elementor-toc__toggle-button--expand\" role=\"button\" tabindex=\"0\" aria-controls=\"elementor-toc__eb7eb02\" aria-expanded=\"true\" aria-label=\"Open table of contents\"><i aria-hidden=\"true\" class=\"fas fa-chevron-down\"><\/i><\/div>\n\t\t\t\t<div class=\"elementor-toc__toggle-button elementor-toc__toggle-button--collapse\" role=\"button\" tabindex=\"0\" aria-controls=\"elementor-toc__eb7eb02\" aria-expanded=\"true\" aria-label=\"Close table of contents\"><i aria-hidden=\"true\" class=\"fas fa-chevron-up\"><\/i><\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<div id=\"elementor-toc__eb7eb02\" class=\"elementor-toc__body\">\n\t\t\t<div class=\"elementor-toc__spinner-container\">\n\t\t\t\t<i class=\"elementor-toc__spinner eicon-animation-spin eicon-loading\" aria-hidden=\"true\"><\/i>\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-adb21d2 elementor-widget elementor-widget-spacer\" data-id=\"adb21d2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\n\t\t<div class=\"elementor-element elementor-element-89c4ab8 elementor-widget elementor-widget-wp-widget-text\" data-id=\"89c4ab8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><p>AI hallucinations are plausible-sounding but incorrect outputs from large language models (LLMs). They arise for statistical and incentive reasons, and the best practical defenses combine retrieval-grounding (RAG), transparent provenance\/citations, uncertainty-aware behavior, and automated + human verification.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-c410a2a elementor-widget elementor-widget-wp-widget-text\" data-id=\"c410a2a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">What is an AI hallucination?<\/h4>\n<p>An AI hallucination occurs when a generative model (like an LLM) produces confident, fluent text that is factually wrong, unsupported, or fabricated. This isn&#8217;t just \u201csmall errors\u201d \u2014 hallucinations can be invented facts, fake citations, or incorrect numeric claims that look\u2014and read\u2014authoritative. Clear definitions and examples help teams design safeguards and measure risk.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-571efac elementor-widget elementor-widget-wp-widget-text\" data-id=\"571efac\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Why LLMs hallucinate (short, actionable explanation)<\/h4>\n<p>Researchers have identified two tight reasons why hallucinations persist:<\/p>\n<p><b>Statistical pattern completion:<\/b> LLMs are trained to predict likely token sequences, not to verify facts. When evidence is missing, they often \u201cfill in\u201d plausible-sounding answers.<\/p>\n<p><b>Evaluation &amp; training incentives: <\/b>Current benchmarks and reward signals commonly reward producing answers rather than admitting uncertainty, so models learn that guessing often improves apparent performance. These structural pressures make hallucinations difficult to eliminate purely by scaling or tweaking models.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-90c5f07 elementor-widget elementor-widget-wp-widget-text\" data-id=\"90c5f07\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">How researchers measure truthfulness<\/h4>\n<p>Benchmarks such as TruthfulQA were created specifically to expose when models echo human falsehoods and misconceptions rather than give correct answers. Benchmarks like this highlight where models produce confident but false outputs and provide quantitative baselines for improvement. Use these or internal test suites to track hallucination rates for your models.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-120cd0c elementor-widget elementor-widget-wp-widget-text\" data-id=\"120cd0c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">The most effective engineering pattern: Retrieval-Augmented Generation (RAG)<\/h4>\n<p><b>What RAG does:<\/b> RAG systems retrieve text snippets from an indexed knowledge base (documents, databases, web pages) and then condition the LLM\u2019s output on those retrieved documents. In other words, the model generates answers grounded in explicit sources rather than relying solely on its internal memory. Empirical and theoretical work shows RAG reduces hallucinations for knowledge-intensive tasks.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4e1c2d7 elementor-widget elementor-widget-wp-widget-text\" data-id=\"4e1c2d7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Practical tips when implementing RAG<\/h4>\n<p>Beyond basic accuracy, track:<\/p>\n<ul>\n<li><b>Curate the index:<\/b> Only index high-quality, authoritative sources (official docs, peer-reviewed research, reputable publishers).<\/li>\n<li><b>Return snippets with citations:<\/b> Present the model\u2019s answer alongside the exact supporting snippet and a link\/reference for verification.<\/li>\n<li><b>Use retrieval scoring thresholds:<\/b> If top retrievals are low-confidence, prompt the system to abstain or ask for clarification instead of fabricating an answer.<\/li>\n<li><b>Keep the index fresh:<\/b> Periodically re-index authoritative sources to reduce stale or out-of-date answers.<\/li>\n<\/ul>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2c5e734 elementor-widget elementor-widget-wp-widget-text\" data-id=\"2c5e734\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Four more pragmatic mitigations you can deploy today<\/h4>\n<p><b>Display provenance and citations.<\/b> Always show the source(s) used to form each factual claim so users can verify. Provenance increases user trust and helps catch hallucinations early.<\/p>\n<p><b>Calibrated uncertainty and safe abstention.<\/b> Train or fine-tune scoring so the system can say \u201cI don\u2019t know\u201d or \u201cI can\u2019t verify that\u201d when evidence is weak\u2014this lowers risk in high-stakes contexts.<\/p>\n<p><b>Automated fact-checking pipelines.<\/b> After generation, rerun key claims through dedicated fact-check models or search queries and flag\/substitute answers that don\u2019t hold up.<\/p>\n<p><b>Human-in-the-loop for high-risk outputs.<\/b> Route health, legal, financial, or regulatory content to expert review before publishing or automating action.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9a49328 elementor-widget elementor-widget-wp-widget-text\" data-id=\"9a49328\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Prompting and UX patterns that reduce hallucination risk<\/h4>\n<ul>\n<li><b>Ask for citations up front.<\/b> \u201cAnswer using only the documents provided; list sources for every factual claim.\u201d<\/li>\n<li><b>Break large tasks into verifiable parts.<\/b> Ask the model to produce short, source-backed steps rather than one long unsourced narrative.<\/li>\n<li><b>Surface uncertainty.<\/b> Use prompt templates that require the model to return a confidence score or an \u201cevidence list\u201d for claimed facts.<\/li>\n<\/ul>\n<p>These patterns pair especially well with RAG and a retrieval layer that supplies concrete evidence to the generator.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a3383b3 elementor-widget elementor-widget-wp-widget-text\" data-id=\"a3383b3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Monitoring, metrics and governance<\/h4>\n<ul>\n<li>Track hallucination rate using benchmarks (e.g., TruthfulQA) and in-domain tests. Measure both false positives (fabricated claims) and unsupported specifics (invented dates, sample names, fake citations).<\/li>\n<li>Adopt SLA-style guarantees for different content classes (e.g., \u201cautomated answers for FAQs only; human review required for medical\/legal outputs\u201d).<\/li>\n<li>Audit logs &amp; provenance trails so reviewers can replay where an answer came from and why the system decided to answer.<\/li>\n<\/ul>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-967494e elementor-widget elementor-widget-wp-widget-text\" data-id=\"967494e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Limitations: what to expect in practice<\/h4>\n<p>Even with RAG, uncertainty-aware modeling, and human review, hallucinations are unlikely to disappear entirely. Recent research argues that some level of hallucination is structurally tied to how we train and evaluate LLMs; success requires socio-technical fixes (changing benchmarks, reward incentives) as well as engineering safeguards. Treat mitigation as risk management, not a one-time fix.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-65f0e2a elementor-widget elementor-widget-wp-widget-text\" data-id=\"65f0e2a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Quick checklist for building a \u201ctrusted search\u201d workflow<\/h4>\n<ol>\n<li>Index quality sources (authoritative, up-to-date).<\/li>\n<li>Use retrieval + generation (RAG-style) to ground answers.<\/li>\n<li>Require provenance \u2014 always display citations\/snippets.<\/li>\n<li>Calibrate model uncertainty and encourage abstention where evidence is weak.<\/li>\n<li>Add verification: automated fact-checkers or human review for high-risk domains.<\/li>\n<\/ol>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-026e65d elementor-widget elementor-widget-wp-widget-text\" data-id=\"026e65d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"wp-widget-text.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t<div class=\"textwidget\"><h4 style=\"margin-bottom: 12px\">Final takeaway<\/h4>\n<p>AI hallucinations are a fundamental reliability challenge for generative systems, but they are manageable. The strongest current approach combines grounding (RAG), transparent provenance, uncertainty-aware evaluation, and verification workflows. For any use where accuracy matters, design systems so answers are tied to verifiable sources and err on the side of abstention when evidence is weak \u2014 and keep humans involved where mistakes matter most.<\/p>\n<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d5b7f9c elementor-widget elementor-widget-spacer\" data-id=\"d5b7f9c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-150f1a1 e-grid-align-left elementor-shape-rounded elementor-grid-0 elementor-widget elementor-widget-social-icons\" data-id=\"150f1a1\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"social-icons.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-social-icons-wrapper elementor-grid\" role=\"list\">\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-facebook-f elementor-repeater-item-bd158f5\" href=\"https:\/\/www.facebook.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Facebook-f<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-facebook-f\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-x-twitter elementor-repeater-item-c81668c\" href=\"http:\/\/x.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">X-twitter<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-x-twitter\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-linkedin-in elementor-repeater-item-c1bfed6\" href=\"https:\/\/www.linkedin.com\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Linkedin-in<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-linkedin-in\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t\t\t<span class=\"elementor-grid-item\" role=\"listitem\">\n\t\t\t\t\t<a class=\"elementor-icon elementor-social-icon elementor-social-icon-whatsapp elementor-repeater-item-609b641\" href=\"https:\/\/web.whatsapp.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<span class=\"elementor-screen-only\">Whatsapp<\/span>\n\t\t\t\t\t\t<i aria-hidden=\"true\" class=\"fab fa-whatsapp\"><\/i>\t\t\t\t\t<\/a>\n\t\t\t\t<\/span>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-fb8fcab elementor-widget elementor-widget-spacer\" data-id=\"fb8fcab\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9482c13 align--mobileleft animated-fast align-left elementor-invisible elementor-widget elementor-widget-mae-link\" data-id=\"9482c13\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;_animation&quot;:&quot;fadeInRight&quot;,&quot;_animation_delay&quot;:200}\" data-widget_type=\"mae-link.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\n        <a class=\"master-link  icon-left\" href=\"https:\/\/goteech.io\/zh-hk\/resources\/\" >\n            <span class=\"icon unic unic-arrow-circle-left\"><\/span>            <span>\u8fd4\u56de\u60a8\u7684\u8cc7\u6e90<\/span>\n                    <\/a>\n\n        \t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-66725a6 elementor-widget elementor-widget-spacer\" data-id=\"66725a6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"spacer.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-spacer\">\n\t\t\t<div class=\"elementor-spacer-inner\"><\/div>\n\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"<p>Learn what AI hallucinations are, why they happen, and practical techniques \u2014 RAG, provenance, verification, to produce more trusted search results. Includes tips, examples, and fixes&#8230;.<\/p>","protected":false},"author":2,"featured_media":18639,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[97],"tags":[],"class_list":["post-18512","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-learn"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18512","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/comments?post=18512"}],"version-history":[{"count":11,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18512\/revisions"}],"predecessor-version":[{"id":18638,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/posts\/18512\/revisions\/18638"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/media\/18639"}],"wp:attachment":[{"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/media?parent=18512"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/categories?post=18512"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/goteech.io\/zh-hk\/wp-json\/wp\/v2\/tags?post=18512"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}