Tuesday, February 03, 2026

A GLOBAL FRAMEWORK FOR THE ETHICAL USE OF ARTIFICIAL INTELLIGENCE


Artificial Intelligence (AI) is no longer a futuristic dream — it's part of our daily lives. Even if we don’t realize it, we are already using AI whenever we interact with social media algorithms, mobile apps, recommendation engines, or smart assistants. What’s new, however, is the growing sense that AI must be bounded by ethical principles — ideally, through a global framework that ensures accountability, fairness, and safety across borders.

That aspiration is laudable. But it also compels us to ask: Do we even have a national framework for the ethical use of AI here in the Philippines?


The gap at home: no binding national AI law yet

Logically, one would expect that before widespread AI adoption, a country should put in place guardrails to guide its ethical use. In the Philippines, we are still lagging in that respect. According to UNESCO, the Philippines has yet to enact legally binding rules or a national framework governing AI. 

We do have promising legislative proposals. In May 2023, House Bill 7913—“Artificial Intelligence (AI) Regulation Act”—was introduced to define guiding principles (accountability, transparency, human-centered values) and even establish an “AI Bill of Rights” for citizens. There’s also House Bill 7396, which would set up an AI Development Authority (AIDA) to license AI developers. Several other bills are pending in Congress (HB 7983, HB 9448, etc.). 

On the executive‐policy side, the Department of Economy, Planning, and Development (DEPDev) recently released a Policy Note on Artificial Intelligence aiming to embed AI into the national development agenda—while flagging challenges such as data governance, infrastructure, and regulatory fragmentation. DEPDev has also envisioned a unified national AI strategy that includes a national data governance framework under the Philippine Statistics Authority. Meanwhile, the government’s Innovation Council approved a think tank to help shape AI policy.

Yet none of these are yet laws. In the interim, the National Privacy Commission issued an advisory in December 2024 on how the existing Data Privacy Act would apply to AI systems that process personal data. That’s a useful stopgap—but hardly sufficient protection for complex and powerful AI systems.

So we find ourselves in a limbo: discussions, bills, policy notes — but no enforceable national standard.


Learning from India: a potential model

We could learn some lessons from India, which has positioned itself as a front-runner in articulating a national AI framework. Its model is not perfect, but it is more advanced than ours in many respects.

India’s National Strategy for AI (via NITI Aayog) was published as early as 2018 and expanded in supplementary documents, notably Part 1: Principles for Responsible AI. These principles draw on the familiar “FAT” framework — Fairness, Accountability, Transparency — as well as philosophical foundations such as respect for persons, beneficence, and justice.

India is also implementing a “whole-of-government” approach, recommending an interministerial AI coordination committee, a technical secretariat, and an AI incident database. There’s also discussion of a voluntary code of conduct for AI companies focusing on data labeling, ethical practices, and stakeholder engagement. 

In the financial sector, the Reserve Bank of India (RBI) recently proposed FREE-AI (Framework for Responsible and Ethical Enablement of AI) for financial institutions. The framework is organized around infrastructure, governance, policy, protection, and assurance, attempting to balance innovation with risk mitigation.

India likewise invests heavily in flagship AI projects under its “Safe & Trusted AI” pillar, tackling bias, deepfake detection, forensic robustness, and more. 

The cumulative effect: India is not just debating moral principles — it is actively building infrastructure, institutional capacity, and pilot systems.


What should we do in the Philippines?

This is where my own suggestions and questions come in. If I were designing a roadmap for the Philippines, here’s where I’d start — and where I’d call on your feedback.

1. Choose a lead department or body
Which agency should take charge of drafting the national AI ethical framework? My top contenders:

  • DICT (Department of Information and Communications Technology) — because AI is in many ways an ICT issue (connectivity, digital services, infrastructure).

  • DOST (Department of Science and Technology) — because R&D, innovation, and science are its domain.

  • DEPDev / National Innovation Council — because the ethical use of AI is ultimately a development policy issue. The National Innovation Council, which sits under DEPDev, already coordinates national innovation policy.

If no line department steps up, the Presidential Management Staff (PMS) could serve as convener (especially for issuing interim Executive Orders).

2. Use executive orders or administrative issuances as interim tools
Just as we have the Freedom of Information EO, the President could issue an EO or memorandum directing agencies to adopt key AI ethics principles, establish oversight committees, and pilot “AI sandboxes.” That gives us something to act on while Congress deliberates.

3. Enact a “Philippine AI Bill of Rights”
Borrowing concepts from HB 7913, the basic rights should include:

  • Protection from unsafe or discriminatory AI systems

  • Right to explanation / algorithmic transparency

  • Right to privacy and data protection

  • Right to remedy or redress

  • Accountability mechanisms, audit trails, and appeal channels

4. Risk-based, modular regulation
Instead of a one-size-fits-all law, we should adopt tiered regulation: low-risk AI (e.g., recommender engines) gets light oversight, while high-impact AI (e.g., credit scoring, health diagnosis, criminal justice) requires stronger standards, audits, and oversight. We must avoid stifling innovation for startups and small players.
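To make the tiering concrete, the risk classification could be expressed as a shared lookup that both regulators and developers consult. The tier names, example use cases, and obligations below are illustrative assumptions, not drawn from any pending bill:

```python
# Illustrative sketch of risk-based, tiered AI oversight.
# Tier names, example use cases, and obligations are hypothetical.

RISK_TIERS = {
    "low": {
        "examples": ["content recommendation", "spam filtering"],
        "obligations": ["self-declared transparency notice"],
    },
    "high": {
        "examples": ["credit scoring", "health diagnosis", "criminal justice"],
        "obligations": ["pre-deployment audit", "human oversight",
                        "incident reporting", "periodic re-certification"],
    },
}

def obligations_for(use_case: str) -> list[str]:
    """Return the oversight obligations for a given AI use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    # Anything unclassified defaults to regulator review, not to zero oversight.
    return ["unclassified: require regulator review before deployment"]
```

The key design choice is the default: an unlisted use case falls to the regulator for classification rather than escaping oversight entirely.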

5. Build institutional capacity & incident reporting
An AI incident reporting system (to log errors, biases, misuse) should be a core part of governance. Also, we need dedicated regulatory units with technical capacity — auditors, algorithmic forensics, ethics officers.
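A national incident registry needs little more than a shared record format to get started. The fields, categories, and escalation rule below are an assumed minimal schema, sketched only for illustration:

```python
# Minimal sketch of an AI incident record for a national registry.
# Field names, categories, and the escalation rule are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system_name: str           # the AI system involved
    operator: str              # agency or company operating it
    category: str              # e.g. "bias", "error", "misuse", "outage"
    severity: str              # e.g. "minor", "major", "critical"
    description: str           # what happened, in plain language
    affected_persons: int = 0  # estimated number of people affected
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def needs_escalation(self) -> bool:
        """Critical incidents, or incidents affecting many people,
        get escalated to the oversight body (threshold is illustrative)."""
        return self.severity == "critical" or self.affected_persons > 100
```

Even this skeletal format would let regulators aggregate incidents across agencies and spot recurring failure patterns.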

6. Localize the global framework to barangay / community level
Here is an intriguing idea: a barangay-level adaptation of ethical AI governance. Imagine:

  • A local “bias audit toolkit” for barangay-run decision systems (e.g. in aid distribution)

  • Voice authentication modules for local cooperative transactions

  • Deepfake detection protocols for local media or misinformation

  • Modular AI for waste sorting, aquaculture forecasting, or local planning (circular design)

7. Promote public literacy and stakeholder engagement
Ethical AI frameworks cannot just be top-down. We need citizen engagement, civil society input, and public awareness campaigns — so people understand their rights and the risks of AI systems.


Some caveats and tensions

  • Overregulation can suffocate innovation. We must keep regulation principle-based and flexible (not excessively prescriptive).

  • Developing countries must balance ethical guardrails with capacity constraints: regulatory agencies may lack staff or technical know-how.

  • AI governance must be interoperable globally—but also flexible to local contexts (language, culture, social norms).

  • AI regulation must dovetail with existing privacy laws such as our Data Privacy Act — neither contradicting nor undermining them.

  • Startups and innovators should be involved, to avoid “regulation by bureaucrats” that locks out new ideas.


Closing reflection

A global ethical AI framework is a necessary but insufficient goal if nations like ours don’t first build their own national foundations. For the Philippines, the path is clear: pick a lead institution, issue interim EOs, pass a guiding law, and pilot community-level systems.

Let me leave you with these questions:

  • If you were in charge, which agency would you pick to lead the ethics-AI drafting effort?

  • How would you design a barangay-level AI sandbox that is safe yet empowering?

  • What safeguards would you insist must always stay non-negotiable in any law or EO?

Ramon Ike V. Seneres, www.facebook.com/ike.seneres

iseneres@yahoo.com, senseneres.blogspot.com

09088877282/02-04-2026


Monday, February 02, 2026

ARTIFICIAL INTELLIGENCE FOR MEDICAL KIOSKS


Medical kiosks have long been a quiet fixture in the healthcare landscape — enclosures or terminals where patients can do self-checks, basic triage, or upload vital signs. The novelty today lies not in the structure, but in what brains we’re putting into those kiosks. In China, the latest wave adds artificial intelligence, telemedicine links, and even automated dispensing of simple medicines.

Let me unpack why this matters, what’s promising, and how the Philippines must think ahead — lest we again find ourselves importing not just the hardware, but the knowledge.


From kiosk to AI clinic

To set expectations straight: you don’t need a kiosk to use AI in health care. A room, cubicle, or “health corner” with the same sensors and software could serve. But kiosks bring advantages: modularity, ease of deployment, visibility, 24/7 access, and sometimes outdoor or semi-outdoor operation. In hospitals where space is precious, kiosks can relieve congestion; in public spaces they bring healthcare to people’s pathways.

What is new in China is the infusion of AI into the diagnostic workflow. These kiosks can:

  • Measure vital signs (blood pressure, heart rate, oxygen saturation)

  • Offer AI-based preliminary diagnosis or risk stratification

  • Connect the user to remote doctors via telemedicine

  • Dispense basic medicines when indicated

Some early accounts say patients receive results instantly and, when appropriate, walk away with a prescription filled on the spot.
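The “AI-based preliminary diagnosis or risk stratification” step can start as simple rule-based triage over the measured vitals. The thresholds below are illustrative placeholders, not clinical guidance; any real kiosk would need medically validated cut-offs:

```python
# Illustrative rule-based triage over kiosk vital signs.
# Thresholds are placeholders, NOT clinical guidance.

def triage(systolic_bp: int, heart_rate: int, spo2: float) -> str:
    """Stratify one kiosk reading into urgent / elevated / normal."""
    if spo2 < 90 or systolic_bp >= 180:
        return "urgent: refer to a doctor immediately"
    if spo2 < 94 or systolic_bp >= 140 or not 50 <= heart_rate <= 110:
        return "elevated: connect to a telemedicine consult"
    return "normal: self-care advice, routine follow-up"
```

The point of the sketch is the workflow, not the medicine: readings, stratification, then either dispensing, teleconsult, or referral.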

These kiosks are being placed not just in hospitals, but in transit hubs, community centers, clinics, and high-footfall public areas. The aim is obvious: reduce waiting times, reduce physician burden, and democratize access to basic healthcare.


The ambition behind China’s move

China’s healthcare AI push is bold and well funded. Their “Agent Hospital” is perhaps the most dramatic manifestation. Conceived by Tsinghua University’s AIR (Institute for AI Industry Research), this is not (yet) a physical hospital for humans, but a simulated hospital populated by AI “doctor agents.” 

These AI agents are trained on synthetic patient cases and interact in a closed environment, simulating all clinical processes (triage, diagnosis, treatment, follow-up). After processing tens of thousands of cases, they scored 93.06% diagnostic accuracy on medical exam benchmarks such as MedQA. 

In media accounts, they claim the capacity to “treat” over 10,000 virtual patients in days — a volume that might take human physicians years. 

China is also integrating AI into real hospitals. China Daily reports that AI diagnostic tools are rolling into urban hospitals first, with rural clinics to follow via telemedicine. Some AI tools are already subject to ethical review protocols and data anonymization safeguards. 

Beyond the hospital realm, companies like Ant Group have launched nearly 100 AI “doctor agents” in their Alipay app — giving users instant consultations based on virtual clinicians modeled after real doctors. 

Thus, China is attacking the problem on multiple fronts — kiosk + AI + app. The kiosk is just one vector.


Market scale & trends

The global medical kiosk market is growing rapidly. In 2023 it was estimated at about USD 1.42 billion, and is forecast to reach USD 3.76 billion by 2030, with a compound annual growth rate (CAGR) of ~15%. Other forecasts place the 2024 value at USD 1.76 billion and a 2032 target of USD 5.34 billion.

As for China specifically, one study projects its health kiosk market to grow from USD 534 million in 2025 to USD 1.46 billion in 2031, a CAGR of ~17.9%. Another source reports China’s medical kiosk revenues in 2022 as USD 69.8 million, rising to USD 249.7 million by 2030. 

Meanwhile, China’s broader AI healthcare market is booming. In 2023, it was valued at about USD 1.59 billion, and projections suggest growth to USD 7.33 billion by 2028, possibly reaching USD 18.88 billion by 2030. 

These figures show that medical kiosks with embedded AI are not niche side projects — they are part of a fast-rising segment of digital health.


What this means for us — possibilities and pitfalls

China’s bold steps present both opportunity and warning for the Philippines.

Opportunities:

  1. Decentralized triage at barangay / health center level. Kiosks could help in rural or urban poor areas to filter patients, catch early signs of hypertension, hypoxia, diabetes, or respiratory issues.

  2. Load-balancing hospitals. Let kiosks handle minor complaints, offload nonurgent cases, free doctors for complex work.

  3. Health surveillance and data. Aggregated anonymized data could detect spikes, emerging clusters, or patterns in community health.

  4. Local innovation. If we build our own software, AI models, sensor modules, we retain intellectual property, adapt to local language, disease profiles, and regulations.

  5. Synergy across agencies. DOH, DOST, and DICT (or its successor) must coordinate. Universities such as UP Manila, PGH, ADMU, and DLSU, along with their engineering and computing departments, can be involved.

Pitfalls and challenges:

  • Regulation & safety. AI misdiagnosis is a risk. We must develop frameworks — who is liable when AI errs? What oversight?

  • Standards & interoperability. Kiosks must integrate with hospital electronic medical records (EMRs), health insurance databases, and identity systems.

  • Infrastructure gaps. In many barangays, reliable electricity, internet connectivity, or climate control may not be assured.

  • User trust & literacy. Elderly patients may distrust or misunderstand kiosks.

  • Data privacy & security. Health data is sensitive. Robust encryption, anonymization, and governance are essential.

  • Cost / maintenance. Kiosks in public spaces endure wear, vandalism, environmental stress — maintenance must be factored.

  • Overreliance & de-skilling. If doctors defer too much to AI, skills may atrophy or judgment may be lost in edge cases.


My suggestions and questions for policy design

  1. Pilot first, scale later. Start in Metro Manila barangays or in provinces with partner hospitals; test trust, accuracy, and workflows.

  2. Open-architecture model. Use modular hardware and software so the system can be upgraded over time without vendor lock-in.

  3. Local AI training. Use Filipino patient data (de-identified) to train models attuned to local disease prevalence, demographics, language, dialects.

  4. Public-private partnerships. Engage local startups, universities, hospitals.

  5. Regulation ahead of rollout. DOH must lead in setting safety, audit, validation, certification rules for AI health devices.

  6. Human in the loop. AI should assist, not replace — kiosks must flag ambiguous cases for human doctors.

  7. Community engagement. Educate people: how to use kiosks, interpret results, trust them.

  8. Redundancy & fallback. Kiosks should gracefully degrade to manual mode when connectivity or hardware fails.
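Points 6 and 8 can be combined in a single dispatch rule: the kiosk acts autonomously only when its confidence is high and the telemedicine link is up; otherwise it flags a human doctor or degrades to manual mode. The confidence threshold and mode names below are assumptions for illustration:

```python
# Sketch of human-in-the-loop dispatch with manual fallback.
# Confidence threshold and mode names are illustrative assumptions.

def dispatch(ai_confidence: float, is_ambiguous: bool,
             link_up: bool, threshold: float = 0.9) -> str:
    """Decide how a kiosk should handle one patient encounter."""
    if not link_up:
        # Connectivity lost: degrade gracefully to manual operation.
        return "manual: print vitals, advise visit to nearest health center"
    if is_ambiguous or ai_confidence < threshold:
        # The AI assists but does not decide borderline cases.
        return "escalate: route to a human doctor via telemedicine"
    return "assist: show AI result, offer optional teleconsult"
```

Note the ordering: the connectivity check comes first, so the kiosk never silently pretends its AI pipeline is available when it is not.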

Questions to ponder:

  • What kinds of diseases should the kiosk AI first focus on (e.g. hypertension, respiratory, diabetes)?

  • How many false positives / negatives would be acceptable?

  • How to integrate with PhilHealth, DOH systems, private hospitals?

  • Could we build the kiosk shells locally (manufactured in PH)?

  • What funding model — government, donor, insurance reimbursements?


Final thought

China’s AI-powered medical kiosks are not mere gadgets — they signal a pivot in how health services may be delivered in high volume, decentralized settings. The true innovation lies not in the kiosk itself, but in the AI logic, data flows, integration, and governance behind it.

If we let this moment slip, we risk acting only as importers of finished systems, losing the chance to domesticate the talent, the algorithms, the capacity. We have capable doctors, engineers, computer scientists, universities — we must act now to build not just kiosks, but our AI health ecosystem.

Let us design with foresight, pilot with caution, but scale with boldness. The health of millions may depend on it.

Ramon Ike V. Seneres, www.facebook.com/ike.seneres

iseneres@yahoo.com, senseneres.blogspot.com

09088877282/02-03-2026

