<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>LivingMeta Public Governance</title>
    <link>https://public-governance.livingmeta.ai</link>
    <description>Latest research papers, blog posts, and grey literature — curated and classified by AI</description>
    <language>en</language>
    <lastBuildDate>Tue, 21 Apr 2026 13:09:46 GMT</lastBuildDate>
    <atom:link href="https://public-governance.livingmeta.ai/feed.xml" rel="self" type="application/rss+xml"/>
    <image>
      <url>https://public-governance.livingmeta.ai/icon.png</url>
      <title>LivingMeta Public Governance</title>
      <link>https://public-governance.livingmeta.ai</link>
    </image>
    <item>
      <title>Annex Paper &quot;Learning from COVID-19 pandemic in governing Smart Cities&quot;</title>
      <link>https://doi.org/10.30827/digibug.72759</link>
      <description/>
      <pubDate>Sat, 01 Jan 2022 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4211207705</guid>
      <source url="https://public-governance.livingmeta.ai">LivingMeta Public Governance</source>
      <category>crisis_governance</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Talent Management: “Here Come the Digital Workers!”</title>
      <link>https://doi.org/10.1287/orms.2025.02.15</link>
      <description>We need to develop a culture of collaborative intelligence. Responsible organizations are figuring out how to enable their top talent with smart computers, instead of replacing them. Leading organizations invest in empowering (augmenting) their employees with artificial intelligence (AI) so they can create higher value in better ways, as well as automating processes that are lower value and do not engage employees.</description>
      <pubDate>Tue, 17 Jun 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4411364153</guid>
      <source url="https://public-governance.livingmeta.ai">LivingMeta Public Governance</source>
      <category>organizational_governance</category>
      <category>organizational_culture</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Transforming Sepsis</title>
      <link>https://doi.org/10.33009/fsu_d1621397-1660-4e89-8414-01b681fc9329</link>
      <description>Background and Significance: Sepsis remains a major global healthcare challenge, contributing to substantial morbidity, mortality, and economic burden. It is defined as life-threatening organ dysfunction resulting from a dysregulated host response to infection, with an estimated global incidence affecting millions and accounting for nearly 20% of all deaths worldwide (CDC, 2024). Despite advancements in critical care, including standardized protocols and evidence-based approaches such as the SEP-1, sepsis-related outcomes remain suboptimal. Variability in adherence to clinical guidelines, challenges in early recognition, and delays in timely intervention impede effective sepsis management across inpatient settings, including the emergency department, at the 350-bed urban academic teaching hospital targeted for this program evaluation. These barriers contribute to inconsistencies in care delivery, potentially exacerbating patient morbidity and mortality despite the availability of evidence-based protocols. Addressing these challenges is imperative to improving patient outcomes and reducing the overall healthcare burden associated with sepsis. When implemented comprehensively, sepsis bundles yield superior treatment outcomes compared to individual interventions, and patient self-management support is a critical component of these protocols or bundles (Srzic et al., 2022). According to Evans et al. (2021), sepsis has been a recognized clinical challenge since the first consensus definitions were established in 1991. Efforts to standardize care began with the launch of the Surviving Sepsis Campaign: International Guidelines for Management of Sepsis and Septic Shock (SSC) in 2002, followed by multiple revisions to sepsis treatment guidelines. While the incidence of sepsis is increasing, mortality rates have declined due to initiatives such as the SSC and standardized sepsis care bundles. However, the overall burden of sepsis-related deaths continues to rise, emphasizing the need for comprehensive implementation of these sepsis interventions (Srzic et al., 2022). Clinical challenges contribute to preventable complications, including recurrent infections and re-admissions, which are associated with both patient safety risks and financial consequences. While reducing sepsis readmissions can lower direct healthcare costs, the loss of reimbursement revenue may impact hospital financial incentives, making it essential to balance cost-saving measures with investments in quality care (Evans et al., 2021). The SEP-1 performance metric, introduced by CMS in 2015, aims to enhance sepsis care through a structured, protocol-driven approach that includes timely administration of antibiotics, lactate measurement, fluid resuscitation, and hemodynamic monitoring. While its adoption has been widespread, its impact on patient outcomes remains debated, with studies yielding mixed results. Some evidence suggests that improved SEP-1 compliance is associated with reduced mortality, whereas other research indicates that strict adherence may not always lead to meaningful clinical benefits (Evans et al., 2021). Additionally, implementation poses logistical and operational challenges, requiring extensive multidisciplinary coordination, documentation, and data abstraction, which can strain healthcare resources and divert attention from direct patient care (P. Moreno-Franco, personal communication, March 10, 2025). This organization demonstrates favorable patient outcomes, with increased awareness and guideline-driven interventions contributing to improved sepsis recognition and treatment. However, challenges persist in meeting SEP-1 performance criteria, highlighting areas for further optimization in adherence to evidence-based sepsis management protocols. Addressing this issue</description>
      <pubDate>Wed, 30 Jul 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4412745253</guid>
      <source url="https://public-governance.livingmeta.ai">LivingMeta Public Governance</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>EVERY BODY COUNTS: A Global Citizen-Science Initiative to Rebuild Medical Data for All of Humanity</title>
      <link>https://doi.org/10.5281/zenodo.17790714</link>
      <description>EVERY BODY COUNTS: A Global Citizen-Science Initiative to Rebuild Medical Data for All of Humanity White Paper Draft v0.9 — CollectiveOS Edition Prepared for: GATA → PRIME Review, GitHub Commit, and Zenodo DOI Author: Human Global Science Collective (HGSC) | Version 2.0 | 2026 Draft Executive Summary The history of modern medicine is, in many respects, a history of exclusion. Despite the extraordinary technological triumphs of the 21st century—from the rapid development of mRNA vaccines to the dawn of CRISPR gene editing—the foundational data upon which these innovations rest is critically flawed. It is a dataset built primarily on a single demographic: individuals of European ancestry, largely male, and socioeconomically advantaged. This systemic bias, which critics have termed &quot;data apartheid&quot; and global health bodies acknowledge as a &quot;mounting crisis,&quot; renders vast swathes of the human population invisible to the precision medicine revolution. This white paper, Every Body Counts, introduces a comprehensive paradigm shift in how human biological data is collected, governed, and utilized. We propose the transition from an extractive model of medical research—where data is mined from passive subjects by centralized institutions—to a sovereign, citizen-science model powered by the CollectiveOS framework. By leveraging the Governance, Audit, Trust, and Authority (GATA) model, we aim to rebuild the global medical dataset from the ground up, ensuring that every biological reality is represented, quantified, and cured. 
We outline the deployment of CollectiveOS v2.0, a sovereign mobile super-node architecture that democratizes compute and data storage.1 We detail the External AI Motherboard hardware, a patent-free modular system designed to process genomic and phenotypic data at the edge, preserving privacy while contributing to a global &quot;Knowledge Commons&quot;.1 Furthermore, we integrate gamified citizen science, utilizing blockchain-verified &quot;Proof of Impact&quot; to incentivize participation among historically marginalized communities.3 This is not merely a research proposal; it is a governance restructuring of how human biology is measured. It is a call to arms for the Human Global Science Collective to correct the errors of 1977 and 1993, and to ensure that in the era of AI-driven medicine, no body is left behind. Part I: The Crisis of Representation 1.1 The Legacy of Exclusion: Anatomy of a Data Gap To understand the necessity of the Every Body Counts initiative, one must first confront the historical trajectory that led to the current homogeneity of medical data. The exclusion of women and minorities was not accidental; it was, for decades, explicit federal policy. 
In 1977, the US Food and Drug Administration (FDA) issued a guideline titled &quot;General Considerations for the Clinical Evaluation of Drugs,&quot; which recommended the exclusion of women of childbearing potential from Phase I and early Phase II clinical trials.5 While the ostensible goal was to prevent tragedies similar to the thalidomide disaster—where a sedative caused thousands of severe birth defects in Europe and Canada—the policy was applied with a broad, paternalistic brush.5 The exclusion applied not just to pregnant women, or those trying to conceive, but to any premenopausal female &quot;capable&quot; of becoming pregnant, regardless of their contraceptive use, single status, or the sexual sterilization of their partners.5 This effectively banned nearly all women aged 15 to 50 from the early stages of drug development, where critical safety and dosage data are established. The &quot;protective&quot; paternalism of the 1977 policy resulted in a &quot;male norm&quot; for medical data. For nearly two decades, pharmaceutical products were tested almost exclusively on male physiology, with dosages, toxicity thresholds, and side-effect profiles extrapolated—often dangerously—to women.7 The medical establishment operated under the assumption that female physiology was identical to male physiology, merely smaller and complicated by &quot;hormonal noise&quot; that interfered with clean data sets.8 The tide began to turn in the late 1980s, driven by the Congressional Caucus for Women&apos;s Issues, which requested a General Accounting Office (GAO) investigation into the National Institutes of Health (NIH) implementation of inclusion guidelines.5 This pressure culminated in the NIH Revitalization Act of 1993. 
This landmark legislation mandated that NIH-funded trials include women and minorities as subjects in clinical research.5 Crucially, it required that Phase III clinical trials have sample sizes adequate to support a &quot;valid analysis&quot; of potential differences in intervention effects between sexes and racial subgroups.9 However, legislation does not equal implementation. While the Revitalization Act changed the requirements for receiving federal funding, it did not fundamentally alter the incentives of the pharmaceutical industry or the infrastructure of recruitment. The FDA, unlike the NIH, is not strictly bound by the 1993 Act in the same way, and while it established an Office of Women&apos;s Health (OWH) to advocate for participation, the regulatory mandate for private industry remains less stringent than for public grants.6 Three decades later, the gap persists. While women now make up a larger percentage of total trial participants, they remain significantly underrepresented in early-phase trials and in specific therapeutic areas like cardiovascular disease. The disparity is even more acute for racial and ethnic minorities. The &quot;substantial evidence&quot; exception in the 1993 Act allowed researchers to bypass diversity requirements if they could argue there was no evidence of a difference between subgroups—a circular logic, as the lack of evidence stemmed from the lack of prior study.9 1.2 The Current State of Genomic Inequality Today, the statistics remain damning. A 2024 review of the GWAS Catalog (Genome-Wide Association Studies) reveals a persistent, overwhelming bias. 
Despite making up less than 16% of the global population, individuals of European ancestry constitute 87.77% of all participants in genomic association studies.11 Individuals of African descent—who possess the highest genetic diversity on the planet due to the &quot;Out of Africa&quot; evolutionary bottleneck—make up a mere 0.16% of these datasets.11 This is a staggering scientific failure. By focusing almost exclusively on European genomes, we are effectively studying a subset of human genetic variation and treating it as the whole. We miss critical insights into disease etiology, rare variants, and gene-environment interactions that could benefit all of humanity.12 The disparity extends to other groups as well. Hispanic and Latin American populations, who represent a complex admixture of Indigenous American, European, and African ancestries, comprise only 1.71% of GWAS participants.11 Asian populations fare slightly better at 5.33%, but this is still woefully disproportionate to their share of the global population.11 Table 1: The Genomic Diversity Gap (GWAS Catalog 2024) Ancestry Category Percentage of Global Population (Approx.) Percentage of GWAS Participants Representation Ratio (Index) European ~16% 87.77% 5.48 (Over-represented) Asian ~60% 5.33% 0.09 (Severely Under-represented) African ~17% 0.16% 0.01 (Near Invisible) Hispanic/Latinx ~8% 1.71% 0.21 (Under-represented) Other/Mixed ~5% 1.31% 0.26 (Under-represented) This imbalance creates a &quot;transferability problem.&quot; Polygenic Risk Scores (PRS)—predictive tools that estimate a person&apos;s genetic risk for diseases like diabetes or breast cancer—are trained on these European-dominated datasets. When these tools are applied to non-European populations, their accuracy plummets, often rendering them useless or, worse, misleading.12 We are building a future of precision medicine that works precisely for one group and fails precisely for everyone else. 
1.3 The &quot;Yentl Syndrome&quot;: Cardiovascular Consequences The consequences of this data gap are measured in lives lost. Nowhere is this more evident than in cardiovascular disease (CVD) in women. Historically framed as a &quot;man&apos;s disease,&quot; CVD is the leading killer of women globally, yet it remains woefully under-diagnosed and under-treated. Bernadine Healy, the first female director of the NIH, coined the term &quot;Yentl Syndrome&quot; to describe this phenomenon: women are only treated for heart disease if they present like men.8 The &quot;classic&quot; Hollywood heart attack—crushing chest pain radiating down the left arm—is a male-pattern symptom. While many women do experience chest pain, they are far more likely than men to present with &quot;atypical&quot; symptoms such as nausea, dizziness, extreme fatigue, jaw pain, or shortness of breath.14 Current epidemiological data indicates that women are 50% more likely than men to be misdiagnosed following a heart attack.15 This disparity stems directly from the &quot;male pattern&quot; being codified as the universal standard in medical textbooks and diagnostic algorithms. When a woman presents with nausea and fatigue, a medical establishment trained on male-centric data is prone to misdiagnose her with anxiety, indigestion, or a virus, sending her home while her heart muscle dies.15 The British Heart Foundation reports that women who suffer a STEMI (the most serious type of heart attack) have a 59% greater chance of misdiagnosis compared to men.16 Even when diagnosed correctly, women receive lower standards of care. They are less likely to be prescribed life-saving statins, ACE inhibitors, or blood thinners compared to men with the same condition.15 They are less likely to receive coronary angiography or interventions.15 This systematic failure is a direct downstream effect of the upstream data void. 
When clinical trials for heart failure treatments are composed of 70-80% men, the resulting protocols are inevitably optim</description>
      <pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7108326973</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>EVERY BODY COUNTS: A Global Citizen-Science Initiative to Rebuild Medical Data for All of Humanity</title>
      <link>https://doi.org/10.5281/zenodo.17790715</link>
      <description>EVERY BODY COUNTS: A Global Citizen-Science Initiative to Rebuild Medical Data for All of Humanity White Paper Draft v0.9 — CollectiveOS Edition Prepared for: GATA → PRIME Review, GitHub Commit, and Zenodo DOI Author: Human Global Science Collective (HGSC) | Version 2.0 | 2026 Draft Executive Summary The history of modern medicine is, in many respects, a history of exclusion. Despite the extraordinary technological triumphs of the 21st century—from the rapid development of mRNA vaccines to the dawn of CRISPR gene editing—the foundational data upon which these innovations rest is critically flawed. It is a dataset built primarily on a single demographic: individuals of European ancestry, largely male, and socioeconomically advantaged. This systemic bias, which critics have termed &quot;data apartheid&quot; and global health bodies acknowledge as a &quot;mounting crisis,&quot; renders vast swathes of the human population invisible to the precision medicine revolution. This white paper, Every Body Counts, introduces a comprehensive paradigm shift in how human biological data is collected, governed, and utilized. We propose the transition from an extractive model of medical research—where data is mined from passive subjects by centralized institutions—to a sovereign, citizen-science model powered by the CollectiveOS framework. By leveraging the Governance, Audit, Trust, and Authority (GATA) model, we aim to rebuild the global medical dataset from the ground up, ensuring that every biological reality is represented, quantified, and cured. 
We outline the deployment of CollectiveOS v2.0, a sovereign mobile super-node architecture that democratizes compute and data storage.1 We detail the External AI Motherboard hardware, a patent-free modular system designed to process genomic and phenotypic data at the edge, preserving privacy while contributing to a global &quot;Knowledge Commons&quot;.1 Furthermore, we integrate gamified citizen science, utilizing blockchain-verified &quot;Proof of Impact&quot; to incentivize participation among historically marginalized communities.3 This is not merely a research proposal; it is a governance restructuring of how human biology is measured. It is a call to arms for the Human Global Science Collective to correct the errors of 1977 and 1993, and to ensure that in the era of AI-driven medicine, no body is left behind. Part I: The Crisis of Representation 1.1 The Legacy of Exclusion: Anatomy of a Data Gap To understand the necessity of the Every Body Counts initiative, one must first confront the historical trajectory that led to the current homogeneity of medical data. The exclusion of women and minorities was not accidental; it was, for decades, explicit federal policy. 
In 1977, the US Food and Drug Administration (FDA) issued a guideline titled &quot;General Considerations for the Clinical Evaluation of Drugs,&quot; which recommended the exclusion of women of childbearing potential from Phase I and early Phase II clinical trials.5 While the ostensible goal was to prevent tragedies similar to the thalidomide disaster—where a sedative caused thousands of severe birth defects in Europe and Canada—the policy was applied with a broad, paternalistic brush.5 The exclusion applied not just to pregnant women, or those trying to conceive, but to any premenopausal female &quot;capable&quot; of becoming pregnant, regardless of their contraceptive use, single status, or the sexual sterilization of their partners.5 This effectively banned nearly all women aged 15 to 50 from the early stages of drug development, where critical safety and dosage data are established. The &quot;protective&quot; paternalism of the 1977 policy resulted in a &quot;male norm&quot; for medical data. For nearly two decades, pharmaceutical products were tested almost exclusively on male physiology, with dosages, toxicity thresholds, and side-effect profiles extrapolated—often dangerously—to women.7 The medical establishment operated under the assumption that female physiology was identical to male physiology, merely smaller and complicated by &quot;hormonal noise&quot; that interfered with clean data sets.8 The tide began to turn in the late 1980s, driven by the Congressional Caucus for Women&apos;s Issues, which requested a General Accounting Office (GAO) investigation into the National Institutes of Health (NIH) implementation of inclusion guidelines.5 This pressure culminated in the NIH Revitalization Act of 1993. 
This landmark legislation mandated that NIH-funded trials include women and minorities as subjects in clinical research.5 Crucially, it required that Phase III clinical trials have sample sizes adequate to support a &quot;valid analysis&quot; of potential differences in intervention effects between sexes and racial subgroups.9 However, legislation does not equal implementation. While the Revitalization Act changed the requirements for receiving federal funding, it did not fundamentally alter the incentives of the pharmaceutical industry or the infrastructure of recruitment. The FDA, unlike the NIH, is not strictly bound by the 1993 Act in the same way, and while it established an Office of Women&apos;s Health (OWH) to advocate for participation, the regulatory mandate for private industry remains less stringent than for public grants.6 Three decades later, the gap persists. While women now make up a larger percentage of total trial participants, they remain significantly underrepresented in early-phase trials and in specific therapeutic areas like cardiovascular disease. The disparity is even more acute for racial and ethnic minorities. The &quot;substantial evidence&quot; exception in the 1993 Act allowed researchers to bypass diversity requirements if they could argue there was no evidence of a difference between subgroups—a circular logic, as the lack of evidence stemmed from the lack of prior study.9 1.2 The Current State of Genomic Inequality Today, the statistics remain damning. A 2024 review of the GWAS Catalog (Genome-Wide Association Studies) reveals a persistent, overwhelming bias. 
Despite making up less than 16% of the global population, individuals of European ancestry constitute 87.77% of all participants in genomic association studies.11 Individuals of African descent—who possess the highest genetic diversity on the planet due to the &quot;Out of Africa&quot; evolutionary bottleneck—make up a mere 0.16% of these datasets.11 This is a staggering scientific failure. By focusing almost exclusively on European genomes, we are effectively studying a subset of human genetic variation and treating it as the whole. We miss critical insights into disease etiology, rare variants, and gene-environment interactions that could benefit all of humanity.12 The disparity extends to other groups as well. Hispanic and Latin American populations, who represent a complex admixture of Indigenous American, European, and African ancestries, comprise only 1.71% of GWAS participants.11 Asian populations fare slightly better at 5.33%, but this is still woefully disproportionate to their share of the global population.11 Table 1: The Genomic Diversity Gap (GWAS Catalog 2024) Ancestry Category Percentage of Global Population (Approx.) Percentage of GWAS Participants Representation Ratio (Index) European ~16% 87.77% 5.48 (Over-represented) Asian ~60% 5.33% 0.09 (Severely Under-represented) African ~17% 0.16% 0.01 (Near Invisible) Hispanic/Latinx ~8% 1.71% 0.21 (Under-represented) Other/Mixed ~5% 1.31% 0.26 (Under-represented) This imbalance creates a &quot;transferability problem.&quot; Polygenic Risk Scores (PRS)—predictive tools that estimate a person&apos;s genetic risk for diseases like diabetes or breast cancer—are trained on these European-dominated datasets. When these tools are applied to non-European populations, their accuracy plummets, often rendering them useless or, worse, misleading.12 We are building a future of precision medicine that works precisely for one group and fails precisely for everyone else. 
1.3 The &quot;Yentl Syndrome&quot;: Cardiovascular Consequences The consequences of this data gap are measured in lives lost. Nowhere is this more evident than in cardiovascular disease (CVD) in women. Historically framed as a &quot;man&apos;s disease,&quot; CVD is the leading killer of women globally, yet it remains woefully under-diagnosed and under-treated. Bernadine Healy, the first female director of the NIH, coined the term &quot;Yentl Syndrome&quot; to describe this phenomenon: women are only treated for heart disease if they present like men.8 The &quot;classic&quot; Hollywood heart attack—crushing chest pain radiating down the left arm—is a male-pattern symptom. While many women do experience chest pain, they are far more likely than men to present with &quot;atypical&quot; symptoms such as nausea, dizziness, extreme fatigue, jaw pain, or shortness of breath.14 Current epidemiological data indicates that women are 50% more likely than men to be misdiagnosed following a heart attack.15 This disparity stems directly from the &quot;male pattern&quot; being codified as the universal standard in medical textbooks and diagnostic algorithms. When a woman presents with nausea and fatigue, a medical establishment trained on male-centric data is prone to misdiagnose her with anxiety, indigestion, or a virus, sending her home while her heart muscle dies.15 The British Heart Foundation reports that women who suffer a STEMI (the most serious type of heart attack) have a 59% greater chance of misdiagnosis compared to men.16 Even when diagnosed correctly, women receive lower standards of care. They are less likely to be prescribed life-saving statins, ACE inhibitors, or blood thinners compared to men with the same condition.15 They are less likely to receive coronary angiography or interventions.15 This systematic failure is a direct downstream effect of the upstream data void. 
When clinical trials for heart failure treatments are composed of 70-80% men, the resulting protocols are inevitably optim</description>
      <pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7108348781</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>multi_domain</category>
      <category>transparency_openness</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Data Sheet 1_The impact of social capital and government support on farmers’ willingness to pay for road governance: a case study of rural road governance in China.zip</title>
      <link>https://doi.org/10.3389/fenvs.2025.1514402.s001</link>
      <description>The global environmental governance landscape is currently confronted with complex and pressing challenges, while rural road environments play a crucial role in providing essential services to rural ecosystems, making them a key factor in the success or failure of governance. Based on the 2018 China Labor Dynamic Survey Database (CLDS), this article approaches the issue from the perspective of rural environmental governance and uses the informal social networks of rural farmers as a starting point to construct an analytical framework for social capital and farmers’ willingness to engage in environmental governance. Additionally, to examine the close link between welfare policies and farmers’ participation in public affairs, this article specifically focuses on the potential moderating effect of government support (agricultural subsidies) and uses the instrumental variable method to mitigate its endogeneity. The study shows that: (1) Both improvements in social networks and social trust can promote farmers’ willingness to engage in environmental governance. However, in the process of social participation, exposure to cutting-edge green technologies is essential to precisely activate individuals’ willingness to engage in environmental governance. (2) In promoting individual farmer participation in environmental protection public affairs, it is crucial to emphasize the incentives provided by welfare policies, increase agricultural subsidies, and expand their depth and breadth of coverage. (3) Government departments should enhance the industrial vitality in the northeastern regions, accelerate industrial transformation, invigorate economic activity, and prevent population loss from causing disruptions in villages. In the western regions, context-specific cultural intervention measures should be developed. 
Through long-term and continuous “cultural governance” practices, a bottom-up, progressive approach should be adopted to stimulate public enthusiasm for participation in non-interest-driven public affairs and achieve self-sufficiency in the cultural field.</description>
      <pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7110860352</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>policy_governance</category>
      <category>citizen_participation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Data Sheet 1_Health inequalities for China’s low-income population: trends, subgroup differences, and influencing factors, 2010–2022.docx</title>
      <link>https://doi.org/10.3389/fpubh.2025.1569726.s001</link>
      <description>Objective: Health inequality is a global challenge, with low-income populations often facing higher health risks. This study aims to systematically analyze the current status, trends, and influencing factors of health inequalities for China’s low-income population. Methods: Utilizing panel data from the China Family Panel Studies (CFPS) from 2010 to 2022, the low-income population was identified using a threshold of 67% of median income. Health inequalities were measured across four dimensions (self-rated health, mental health, two-week health, and chronic disease status) using the Erreygers Index (EI) and Wagstaff Index (WI). Recentered Influence Function (RIF) regression and RIF-Oaxaca decomposition were employed to examine influencing factors of health inequalities and sources of disparities across urban–rural, gender, and age dimensions. Results: From 2010 to 2022, the degree of health inequality was significantly higher for the low-income group than for the middle- and high-income groups in China. Inequalities in self-rated health and chronic disease status showed an increasing trend for the low-income population. Per capita household income (PCHI) was a key factor, exhibiting a significant negative impact on inequalities in self-rated health and mental health (p &lt; 0.01). Age had an inverted U-shaped effect on health inequalities, while household size significantly and negatively influenced disparities in self-rated health and two-week health (p &lt; 0.01). Differences in the level of medical expertise of the visited institutions significantly affected chronic disease status inequalities (p &lt; 0.01). PCHI was the primary source of health inequality disparities across urban–rural, gender, and age groups, with the older adult low-income group experiencing significantly higher levels of health inequality than the non-older adult group.
Conclusion: Health inequalities for the low-income population in China are becoming increasingly severe, particularly among older adult and rural groups. The study recommends implementing interventions across multiple dimensions, including income support, healthcare accessibility, and family care support, while adopting differentiated policies tailored to the characteristics of various groups. Particular attention should be given to intersectionally disadvantaged groups such as low-income older adult individuals in rural areas.</description>
      <pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7111250082</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>policy_governance</category>
      <category>social_equity</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Managing AI Driven Customs Modernisation</title>
      <link>https://doi.org/10.5281/zenodo.18300423</link>
      <description>This repository contains anonymised survey data and analysis code supporting the manuscript “Bridging the Bureaucracy–Agility Divide: A Hybrid Framework for AI-Driven Customs Modernisation in Bangladesh.” The materials are used to analyse AI-enabled decision support, governance, and organisational agility in public-sector customs administration. All data are fully anonymised, and no personal identifiers are included. The repository is provided to support transparency and reproducibility in applied artificial intelligence and public administration research.</description>
      <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7124676832</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Managing AI Driven Customs Modernisation</title>
      <link>https://doi.org/10.5281/zenodo.18300424</link>
      <description>This repository contains anonymised survey data and analysis code supporting the manuscript “Bridging the Bureaucracy–Agility Divide: A Hybrid Framework for AI-Driven Customs Modernisation in Bangladesh.” The materials are used to analyse AI-enabled decision support, governance, and organisational agility in public-sector customs administration. All data are fully anonymised, and no personal identifiers are included. The repository is provided to support transparency and reproducibility in applied artificial intelligence and public administration research.</description>
      <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7124757389</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>HIPAA AUDITING IN CLOUD COMPUTING ENVIRONMENT</title>
      <link>https://doi.org/10.6084/m9.figshare.990063.v3</link>
      <description>The rise of cloud computing has been driven by its benefits: low-cost application hosting, storage, and infrastructure; large cost savings with low initial investment; elasticity and scalability; ease of adoption; operational efficiency; and on-demand resources. With all the security and privacy laws in the healthcare field today, anyone who works with confidential information should know how to protect it. The Health Insurance Portability and Accountability Act (HIPAA) privacy and security regulations are two crucial provisions for the protection of healthcare data. Governance, compliance, and auditing are becoming as important pedagogical subjects as long-established financial auditing and financial control, and designing sound IT governance, compliance, and auditing is a challenging task. This thesis elaborates the concept of HIPAA compliance in cloud computing by examining its history and dynamics and how cloud computing affects certain parts of the HIPAA Security requirements. We briefly describe cyber warfare as a premise to reinforce the reasons for complying with government regulations for information systems. The purpose of this thesis is to explain the importance of HIPAA and to investigate what it takes for healthcare data to be HIPAA compliant, what is expected of healthcare organizations in an audit, and how auditing plays a large part in HIPAA compliance. The cloud is a platform where users not only store their data but also use the services and software provided by a Cloud Service Provider (CSP). The service is economical because users pay only for what they use: data owners remotely store their data in the cloud to enjoy high-quality services and applications, and users can access, store, and use the data.
In the corporate world, a large number of clients access and modify their data. To manage this data, a third-party auditor (TPA) checks the reliability of the data, but this increases the data-integrity risk for the data owner: the TPA can not only read the data but also modify it, so a new approach is needed to solve this problem. We first examine the problem and a potential security scheme to address it. Our algorithm encrypts file contents at the user level, assuring the data owner and client that their data are intact.</description>
      <pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W2772833232</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>BUILDING INFORMATION MODELING (BIM) CULTURAL AND STRATEGIC CAPABILITIES FOR DIGITALISATION IN CONSTRUCTION FIRMS</title>
      <link>https://doi.org/10.22541/au.158074229.96789331</link>
      <description>As digitalisation is applied to redefine products and business models worldwide, evidence abounds that the construction industry is a sector slow to adopt it. While digitalisation tools have been applied to modify processes and procedures in the global North, a large share of the sector in the global South is yet to be disrupted. To help indigenous firms join this rapid transformation, this study reviews the interrelationship between digitalisation and building information modeling (BIM). The study objectives are to examine the prevalence of cultural and strategic capability, to evaluate the relationship between cultural orientation and strategic capability, and to predict a model of BIM adoption from culture and strategy. The study population was drawn from the list of construction firms registered with the Lagos State Tender Board, the list of registered construction firms from the Institute, and specific firms listed on the internet. Factor analysis, correlation, and regression were the adopted statistical tools. The results revealed production; task and goal attainment; information and communication technology; workforce; innovation, learning, and knowledge management; and conflict and dispute resolution as the prevalent cultural orientations. The availability of resources to communicate, interact, and collaborate digitally, and leadership capability to organise and coordinate digitally, are the top two strategic capabilities. Three out of every five firms have moderate awareness of BIM implementation. It was concluded that the level of agreement on adopting the culture and strategy was not reflected in the BIM adoption model. Since the existing orientation and strategy contribute about a tenth of the BIM adoption model, firm leadership needs cultural re-orientation from the client angle and from the business environment.
On strategy, firms need support from institutions and government through policies that cushion the cost of providing resources for transformation.</description>
      <pubDate>Mon, 03 Feb 2020 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W3133144588</guid>
      <source url="https://public-governance.livingmeta.ai">Authorea</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Faculty Opinions recommendation of A clinically applicable approach to continuous prediction of future acute kidney injury.</title>
      <link>https://doi.org/10.3410/f.736293810.793564412</link>
      <description>The early prediction of deterioration could have an important role in supporting healthcare professionals, as an estimated 11% of deaths in hospital follow a failure to promptly recognize and treat deteriorating patients [1]. To achieve this goal requires predictions of patient risk that are continuously updated and accurate, and delivered at an individual level with sufficient context and enough time to act. Here we develop a deep learning approach for the continuous risk prediction of future deterioration in patients, building on recent work that models adverse events from electronic health records [2-17] and using acute kidney injury, a common and potentially life-threatening condition [18], as an exemplar. Our model was developed on a large, longitudinal dataset of electronic health records that cover diverse clinical environments, comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. Our model predicts 55.8% of all inpatient episodes of acute kidney injury, and 90.2% of all acute kidney injuries that required subsequent administration of dialysis, with a lead time of up to 48 h and a ratio of 2 false alerts for every true alert. In addition to predicting future acute kidney injury, our model provides confidence assessments and a list of the clinical features that are most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests [9]. Although the recognition and prompt treatment of acute kidney injury is known to be challenging, our approach may offer opportunities for identifying patients at risk within a time window that enables early treatment. PMID: 31367026. Funding information: This work was supported by Intramural VA, United States, Grant ID: VA999999.</description>
      <pubDate>Sun, 25 Aug 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4205983042</guid>
      <source url="https://public-governance.livingmeta.ai">Faculty Opinions – Post-Publication Peer Review of the Biomedical Literature</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>The Elasticity of Tax Compliance: Evidence from Randomized Property Tax Rates</title>
      <link>https://doi.org/10.1257/rct.3818-1.0</link>
      <description>How does tax compliance vary with the size of the tax burden when opportunities for evasion are high? This paper estimates the elasticity of property tax compliance in a field experiment in Kananga, the Democratic Republic of Congo, a setting where the status quo level of compliance is low. In collaboration with the provincial government, we randomly assign four fixed tax rates at the household level as part of a door-to-door city-wide tax collection campaign. Individuals face between 50 and 100% of their true liability. We study how compliance and total government revenues vary with the rate. We also examine the effects of randomized rates on bribe payment and contributions to informal taxes (in-kind labor payments). Our findings will contribute to knowledge about the determinants of tax compliance in weak states as well as the design of optimal liabilities and enforcement in such settings.</description>
      <pubDate>Thu, 24 Jan 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4233923823</guid>
      <source url="https://public-governance.livingmeta.ai">AEA Randomized Controlled Trials</source>
      <category>accountability</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>The Elasticity of Tax Compliance: Evidence from Randomized Property Tax Rates</title>
      <link>https://doi.org/10.1257/rct.3818</link>
      <description>How does tax compliance vary with the size of the tax burden when opportunities for evasion are high? This paper estimates the elasticity of property tax compliance in a field experiment in Kananga, the Democratic Republic of Congo, a setting where the status quo level of compliance is low. In collaboration with the provincial government, we randomly assign four fixed tax rates at the household level as part of a door-to-door city-wide tax collection campaign. Individuals face between 50 and 100% of their true liability. We study how compliance and total government revenues vary with the rate. We also examine the effects of randomized rates on bribe payment and contributions to informal taxes (in-kind labor payments). Our findings will contribute to knowledge about the determinants of tax compliance in weak states as well as the design of optimal liabilities and enforcement in such settings.</description>
      <pubDate>Thu, 24 Jan 2019 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4238147318</guid>
      <source url="https://public-governance.livingmeta.ai">AEA Randomized Controlled Trials</source>
      <category>accountability</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>The Political Economy of Public Employee Absence: Experimental Evidence from Pakistan</title>
      <link>https://doi.org/10.1257/rct.1363</link>
      <description>Public sector absenteeism undermines service delivery in many developing countries. We report results from an at-scale randomized control evaluation in Punjab, Pakistan of a reform designed to address this problem. The reform affects healthcare for 100 million citizens across 297 political constituencies. It equips government inspectors with a smartphone monitoring system and leads to a 76% increase in inspections. However, the surge in inspections does not always translate into increased doctor attendance. The scale of the experiment permits an investigation into the mechanisms underlying this result. We find that experimentally increasing the salience of doctor absence when communicating inspection reports to senior policymakers improves subsequent doctor attendance. Next, we find that both the reform and the communication of information to senior officials are more impactful in politically competitive constituencies. Our results suggest that interactions between politicians and bureaucrats might play a critical role in shaping the success or failure of reforms.</description>
      <pubDate>Fri, 01 Jul 2016 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4244211264</guid>
      <source url="https://public-governance.livingmeta.ai">AEA Randomized Controlled Trials</source>
      <category>accountability</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Highlighting Statistical Capabilities Within an Organization</title>
      <link>https://doi.org/10.1287/lytx.2024.02.06</link>
      <description>application, and dissemination of statistical science through meetings, publications</description>
      <pubDate>Thu, 04 Apr 2024 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4393965863</guid>
      <source url="https://public-governance.livingmeta.ai">LivingMeta Public Governance</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>HIPAA AUDITING IN CLOUD COMPUTING ENVIRONMENT</title>
      <link>https://doi.org/10.6084/m9.figshare.990063.v2</link>
      <description>The rise of cloud computing has been driven by its benefits: low-cost application hosting, storage, and infrastructure; large cost savings with low initial investment; elasticity and scalability; ease of adoption; operational efficiency; and on-demand resources. With all the security and privacy laws in the healthcare field today, anyone who works with confidential information should know how to protect it. The Health Insurance Portability and Accountability Act (HIPAA) privacy and security regulations are two crucial provisions for the protection of healthcare data. Governance, compliance, and auditing are becoming as important pedagogical subjects as long-established financial auditing and financial control, and designing sound IT governance, compliance, and auditing is a challenging task. This thesis elaborates the concept of HIPAA compliance in cloud computing by examining its history and dynamics and how cloud computing affects certain parts of the HIPAA Security requirements. We briefly describe cyber warfare as a premise to reinforce the reasons for complying with government regulations for information systems. The purpose of this thesis is to explain the importance of HIPAA and to investigate what it takes for healthcare data to be HIPAA compliant, what is expected of healthcare organizations in an audit, and how auditing plays a large part in HIPAA compliance. The cloud is a platform where users not only store their data but also use the services and software provided by a Cloud Service Provider (CSP). The service is economical because users pay only for what they use: data owners remotely store their data in the cloud to enjoy high-quality services and applications, and users can access, store, and use the data.
In the corporate world, a large number of clients access and modify their data. To manage this data, a third-party auditor (TPA) checks the reliability of the data, but this increases the data-integrity risk for the data owner: the TPA can not only read the data but also modify it, so a new approach is needed to solve this problem. We first examine the problem and a potential security scheme to address it. Our algorithm encrypts file contents at the user level, assuring the data owner and client that their data are intact.</description>
      <pubDate>Wed, 01 Jan 2014 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4394168161</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>other</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>deident: Persistent Data Anonymization Pipeline</title>
      <link>https://doi.org/10.32614/cran.package.deident</link>
      <description>A framework for the replicable removal of personally identifiable data (PID) in data sets. The package implements a suite of methods to suit different data types, based on the suggestions of Garfinkel (2015) and the ICO &quot;Guidelines on Anonymization&quot; (2012).</description>
      <pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W4404512888</guid>
      <source url="https://public-governance.livingmeta.ai">LivingMeta Public Governance</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>From Control-Executive Cultures to Ethic-Cybernetics: An Ontological Prognosis for Democratic Economies (2025–2045)</title>
      <link>https://doi.org/10.5281/zenodo.17532667</link>
      <description>This artifact integrates sociology, organizational science, AI governance, and ethics to explore the emergence of “control-executive behavior” — collective obedience cultures that arise in industries under transformational stress. It frames these as systemic, not pathological, and projects a measurable shift toward algorithmic authority reproduction within democracies. The work culminates in the proposal of “Ethic-Cybernetics” — a design model in which responsibility, data, and empathy are re-coupled as the new foundation of governance. Comparative references: 1. Burns, T. &amp; Stalker, G.M. (1961). The Management of Innovation. 2. Hannan, M.T. &amp; Freeman, J. (1984). Structural Inertia and Organizational Change. 3. Deleuze, G. (1992). Postscript on the Societies of Control. 4. Yeung, K. (2018). Algorithmic Regulation: A Critical Interrogation. 5. Janowski, T., Estevez, E., Roseth, B. (2024). When Does Automation in Government Thrive or Flounder? 6. Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. 7. Bandura, A. (1999). Moral Disengagement in the Perpetration of Inhumanities. 8. OECD AI Observatory (2024). Algorithmic Accountability and Ethical Governance Trends. 9. Fraunhofer IAO (2024). Resilienzbarometer der deutschen Wirtschaft. 10. EU AI Act (2024). Annex IV – Transparency and Risk Management Requirements. Ethical note: this work explicitly rejects the pathologization of individuals; it describes collective, structural mechanisms of control and adaptation. License: CC BY-ND 4.0 (ruahAI Tier System compatible)</description>
      <pubDate>Wed, 05 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7103990740</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>democratic_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>From Control-Executive Cultures to Ethic-Cybernetics: An Ontological Prognosis for Democratic Economies (2025–2045)</title>
      <link>https://doi.org/10.5281/zenodo.17532666</link>
      <description>This artifact integrates sociology, organizational science, AI governance, and ethics to explore the emergence of “control-executive behavior” — collective obedience cultures that arise in industries under transformational stress. It frames these as systemic, not pathological, and projects a measurable shift toward algorithmic authority reproduction within democracies. The work culminates in the proposal of “Ethic-Cybernetics” — a design model in which responsibility, data, and empathy are re-coupled as the new foundation of governance. Comparative references: 1. Burns, T. &amp; Stalker, G.M. (1961). The Management of Innovation. 2. Hannan, M.T. &amp; Freeman, J. (1984). Structural Inertia and Organizational Change. 3. Deleuze, G. (1992). Postscript on the Societies of Control. 4. Yeung, K. (2018). Algorithmic Regulation: A Critical Interrogation. 5. Janowski, T., Estevez, E., Roseth, B. (2024). When Does Automation in Government Thrive or Flounder? 6. Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. 7. Bandura, A. (1999). Moral Disengagement in the Perpetration of Inhumanities. 8. OECD AI Observatory (2024). Algorithmic Accountability and Ethical Governance Trends. 9. Fraunhofer IAO (2024). Resilienzbarometer der deutschen Wirtschaft. 10. EU AI Act (2024). Annex IV – Transparency and Risk Management Requirements. Ethical note: this work explicitly rejects the pathologization of individuals; it describes collective, structural mechanisms of control and adaptation. License: CC BY-ND 4.0 (ruahAI Tier System compatible)</description>
      <pubDate>Wed, 05 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7104038048</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>democratic_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>White Paper: Transparent Autonomy — A Governance-First Framework for Intelligent Vehicles</title>
      <link>https://doi.org/10.5281/zenodo.17552450</link>
      <description>Transparent Autonomy A Governance-First Framework for Intelligent Vehicles Author: Mark Anthony BrewerOrganization: Immortal Tek / CollectiveOS™Date: November 2025 Edition: Public Summary v1.01 1. Abstract Autonomous mobility must evolve from black-box automation to white-box governance.Immortal Tek’s CollectiveOS™ Sovereign Stack introduces a governance-first architecture for electric and intelligent vehicles that embeds transparency, accountability, and safety from silicon to software.Each autonomous decision produces a verifiable, human-readable record—creating the world’s first auditable autonomy platform.A dual-track demonstration with select OEM and AI-research partners will showcase transparent, explainable autonomy for regulatory and public validation. 2. Problem Statement Current autonomy systems excel at perception and control but fail at legibility.When failures occur, root causes are often unverifiable, eroding public trust and inflating OEM liability.True progress requires governance infrastructure inside the machine—a framework where every autonomous act can be reconstructed, reviewed, and ethically justified. 3. Immortal Tek Approach Guiding Principle: Every autonomous act must create a traceable, verifiable record. CollectiveOS Sovereign Stack is a multi-agent coordination layer that operates alongside or within existing autonomy stacks.It separates perception, navigation, arbitration, and logging into independent, mutually verifying agents to prevent single-point compromise. Governance Ledger Engine records sensor context, decision rationale, and outcome data in a tamper-evident ledger—forming an immutable chain of evidence. Proof Vault™ Protocol links physical events (braking, steering, energy changes) to digitally signed agent intents. Elastic Safety Layer pre-loads context-aware safety data, enabling rapid, traceable reactions without exposing proprietary model logic. 
(All algorithmic, cryptographic, and synchronization specifics remain proprietary.) 4. Dual-Track Demonstration Strategy (with Partners) Track Lead / Core Partner Objective Illustrative Funding Channel A – LTV Prototype (“Electric Warthog”) Immortal Tek (Lead) Demonstrate auditable autonomy &amp; adaptive/self-healing energy on a light tactical EV; validate on-vehicle explainability + Proof Vault logging. Public R&amp;D / Private Capital B – OEM Government POC Major OEM Defense Division (e.g., GMC Defense) + Immortal Tek Validate governance protocol, safety arbitration, and ledger replication under real-world pilots with transit/energy agencies. Federal mobility initiatives (e.g., DOE/DOT pilot programs) Independent Research Auditor (e.g., top-tier AI lab): Provides model interpretability and bias-testing toolchains against Immortal Tek’s Explainability Bridge outputs only; access remains sandboxed from proprietary firmware and ledger internals. 5. Governance Architecture Overview Sovereign AI Kernel — Localized reasoning core for perception and motion decisions, isolated from external data injection. Explainability Bridge — Translates complex model logic into concise, human-readable cause-and-effect summaries(e.g., “Hard Stop – Pedestrian Detected – Brake Applied.”) Proof Vault Ledger — Stores each summary and corresponding sensor hash as a cryptographically sealed record. Federated Oversight Nodes — Optional read-only mirrors for regulators or fleet operators, enabling real-time verification without exposing proprietary or personal data. Key-management methods and data-exchange protocols are withheld for security. 6. Strategic Impact Regulatory Trust → Flight-recorder-style audit trail for autonomy. OEM Risk Reduction → Verifiable decision logs clarify responsibility and lower liability exposure. Ethical Standardization → Enables measurable ethical policies within AI control logic. 
Industry Benchmark → Establishes the Orichalcum Chrome Standard™—transparency as a core design metric. 7. Partner &amp; Collaboration Framework 7.1 Roles &amp; Responsibilities Immortal Tek (Architect &amp; Integrator) Delivers CollectiveOS™ Sovereign Stack, Governance Ledger Engine, Proof Vault™. Leads system integration, validation plans, and safety-case documentation. Maintains architectural sovereignty, IP custody, and release governance. OEM Partner (e.g., GMC Defense) Provides vehicle platform (chassis, powertrain, E/E architecture), safety engineering, and compliance testing. Integrates Sovereign Stack interfaces at the vehicle network boundary (no exposure to core governance code). Independent Research Auditor (e.g., Google AI/DeepMind or equivalent) Supplies interpretability/bias-assessment tools and publishes verification reports on the Explainability Bridge outputs. Operates under strict sandboxing: no access to core firmware, keys, or raw data beyond agreed artifacts. Public-Sector / Standards Stakeholders (e.g., DOE/DOT/NHTSA, ISO/UNECE working groups) Observe via Federated Oversight Nodes and contribute to validation criteria and harmonized reporting formats. 7.2 Data-Access &amp; Privacy Tiers (Contractual) Tier 0 – On-Device Only: Raw camera/lidar/radar; actuator loops; crypto materials. (Immortal Tek + OEM, strictly local) Tier 1 – Governance Artifacts: Hashed sensor references, explainability summaries, signed agent intents. (Immortal Tek; select read for OEM/Regulators) Tier 2 – Federated Oversight: Read-only ledger mirrors with non-identifiable artifacts for regulators and auditors. (Time-bounded, revocable) Tier 3 – Public Reporting: Aggregated safety metrics and de-identified case studies. 7.3 IP &amp; Security Boundaries Retained by Immortal Tek: Governance codebase, ledger/pruning logic, Explainability Bridge internals, key management, and on-device security architecture. 
Shared Under NDA: Interface specs, safety-case artifacts, test plans, and validation protocols. Published Publicly: Conceptual diagrams, policies, anonymized metrics, and compliance mappings. 7.4 Collaboration Instruments MoU → JDA → SOW progression with clear RACI matrix. Safety Case &amp; Validation Plan aligned to ISO 26262/21448 with traceable evidence in Proof Vault. Cybersecurity Plan aligned to UNECE R155/R156 (secure updates, rollback, monitoring). Regulatory Engagement Plan defining scope for pilot permits, data retention, and reporting cadence. 8. Reference Implementation Concept A modular electric vehicle demonstrates the framework through: Auditable Autonomy: Every decision recorded and signed. Self-Healing Energy System: Battery health and repair events logged for traceability. AI Maintenance Companion: On-board diagnostic unit performing wireless checks and logging maintenance with digital signatures. Modular Add-Ons: Reconnaissance, utility, or medical modules attach via certified interfaces; each installation automatically registered in the ledger. 9. Compliance Alignment (Non-Exhaustive) CollectiveOS supports integration with established global standards, including: Functional Safety (ISO 26262) Safety of Intended Functionality (ISO 21448) Cybersecurity Management (UNECE R155/R156) Automated Driving System Guidelines (NHTSA ADS 2.0) All personal or proprietary data remains local; only hashed, non-identifiable artifacts are externally replicated. 10. Program Roadmap (Milestones) Q1 2026 → Internal prototype demonstration. Q2 2026 → Concept paper submission for government pilot track. Q3 2026 → Collaboration agreements &amp; auditor toolchain finalized. Q4 2026 → Public release of regulator brief &amp; pilot data summary. 11. 
Conclusion Transparent autonomy transforms AI-driven mobility from a black box into a verifiable civic system. By embedding governance directly into the control stack, Immortal Tek’s CollectiveOS™ establishes a durable framework for safety, accountability, and public trust in the age of intelligent transport. 12. Contact Mark Anthony Brewer, Founder — Immortal Tek / CollectiveOS™. ✉ thecollectiveai@proton.me 13. Notice This public edition omits proprietary implementation details, cryptographic specifications, and partner agreements. All information herein is provided for policy, regulatory, and investor evaluation under standard confidentiality expectations. Public Summary v1.01 | For informational purposes only | © 2025 Immortal Tek LLC. All rights reserved.</description>
      <pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7104511852</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>White Paper: Transparent Autonomy — A Governance-First Framework for Intelligent Vehicles</title>
      <link>https://doi.org/10.5281/zenodo.17552451</link>
      <description>Transparent Autonomy A Governance-First Framework for Intelligent Vehicles Author: Mark Anthony Brewer. Organization: Immortal Tek / CollectiveOS™. Date: November 2025. Edition: Public Summary v1.01 1. Abstract Autonomous mobility must evolve from black-box automation to white-box governance. Immortal Tek’s CollectiveOS™ Sovereign Stack introduces a governance-first architecture for electric and intelligent vehicles that embeds transparency, accountability, and safety from silicon to software. Each autonomous decision produces a verifiable, human-readable record—creating the world’s first auditable autonomy platform. A dual-track demonstration with select OEM and AI-research partners will showcase transparent, explainable autonomy for regulatory and public validation. 2. Problem Statement Current autonomy systems excel at perception and control but fail at legibility. When failures occur, root causes are often unverifiable, eroding public trust and inflating OEM liability. True progress requires governance infrastructure inside the machine—a framework where every autonomous act can be reconstructed, reviewed, and ethically justified. 3. Immortal Tek Approach Guiding Principle: Every autonomous act must create a traceable, verifiable record. CollectiveOS Sovereign Stack is a multi-agent coordination layer that operates alongside or within existing autonomy stacks. It separates perception, navigation, arbitration, and logging into independent, mutually verifying agents to prevent single-point compromise. Governance Ledger Engine records sensor context, decision rationale, and outcome data in a tamper-evident ledger—forming an immutable chain of evidence. Proof Vault™ Protocol links physical events (braking, steering, energy changes) to digitally signed agent intents. Elastic Safety Layer pre-loads context-aware safety data, enabling rapid, traceable reactions without exposing proprietary model logic. 
(All algorithmic, cryptographic, and synchronization specifics remain proprietary.) 4. Dual-Track Demonstration Strategy (with Partners) Track A – LTV Prototype (“Electric Warthog”). Lead: Immortal Tek. Objective: Demonstrate auditable autonomy &amp; adaptive/self-healing energy on a light tactical EV; validate on-vehicle explainability + Proof Vault logging. Illustrative funding channel: Public R&amp;D / Private Capital. Track B – OEM Government POC. Lead / Core Partners: Major OEM Defense Division (e.g., GMC Defense) + Immortal Tek. Objective: Validate governance protocol, safety arbitration, and ledger replication under real-world pilots with transit/energy agencies. Illustrative funding channel: Federal mobility initiatives (e.g., DOE/DOT pilot programs). Independent Research Auditor (e.g., top-tier AI lab): Provides model interpretability and bias-testing toolchains against Immortal Tek’s Explainability Bridge outputs only; access remains sandboxed from proprietary firmware and ledger internals. 5. Governance Architecture Overview Sovereign AI Kernel — Localized reasoning core for perception and motion decisions, isolated from external data injection. Explainability Bridge — Translates complex model logic into concise, human-readable cause-and-effect summaries (e.g., “Hard Stop – Pedestrian Detected – Brake Applied.”) Proof Vault Ledger — Stores each summary and corresponding sensor hash as a cryptographically sealed record. Federated Oversight Nodes — Optional read-only mirrors for regulators or fleet operators, enabling real-time verification without exposing proprietary or personal data. Key-management methods and data-exchange protocols are withheld for security. 6. Strategic Impact Regulatory Trust → Flight-recorder-style audit trail for autonomy. OEM Risk Reduction → Verifiable decision logs clarify responsibility and lower liability exposure. Ethical Standardization → Enables measurable ethical policies within AI control logic. 
Industry Benchmark → Establishes the Orichalcum Chrome Standard™—transparency as a core design metric. 7. Partner &amp; Collaboration Framework 7.1 Roles &amp; Responsibilities Immortal Tek (Architect &amp; Integrator) Delivers CollectiveOS™ Sovereign Stack, Governance Ledger Engine, Proof Vault™. Leads system integration, validation plans, and safety-case documentation. Maintains architectural sovereignty, IP custody, and release governance. OEM Partner (e.g., GMC Defense) Provides vehicle platform (chassis, powertrain, E/E architecture), safety engineering, and compliance testing. Integrates Sovereign Stack interfaces at the vehicle network boundary (no exposure to core governance code). Independent Research Auditor (e.g., Google AI/DeepMind or equivalent) Supplies interpretability/bias-assessment tools and publishes verification reports on the Explainability Bridge outputs. Operates under strict sandboxing: no access to core firmware, keys, or raw data beyond agreed artifacts. Public-Sector / Standards Stakeholders (e.g., DOE/DOT/NHTSA, ISO/UNECE working groups) Observe via Federated Oversight Nodes and contribute to validation criteria and harmonized reporting formats. 7.2 Data-Access &amp; Privacy Tiers (Contractual) Tier 0 – On-Device Only: Raw camera/lidar/radar; actuator loops; crypto materials. (Immortal Tek + OEM, strictly local) Tier 1 – Governance Artifacts: Hashed sensor references, explainability summaries, signed agent intents. (Immortal Tek; select read for OEM/Regulators) Tier 2 – Federated Oversight: Read-only ledger mirrors with non-identifiable artifacts for regulators and auditors. (Time-bounded, revocable) Tier 3 – Public Reporting: Aggregated safety metrics and de-identified case studies. 7.3 IP &amp; Security Boundaries Retained by Immortal Tek: Governance codebase, ledger/pruning logic, Explainability Bridge internals, key management, and on-device security architecture. 
Shared Under NDA: Interface specs, safety-case artifacts, test plans, and validation protocols. Published Publicly: Conceptual diagrams, policies, anonymized metrics, and compliance mappings. 7.4 Collaboration Instruments MoU → JDA → SOW progression with clear RACI matrix. Safety Case &amp; Validation Plan aligned to ISO 26262/21448 with traceable evidence in Proof Vault. Cybersecurity Plan aligned to UNECE R155/R156 (secure updates, rollback, monitoring). Regulatory Engagement Plan defining scope for pilot permits, data retention, and reporting cadence. 8. Reference Implementation Concept A modular electric vehicle demonstrates the framework through: Auditable Autonomy: Every decision recorded and signed. Self-Healing Energy System: Battery health and repair events logged for traceability. AI Maintenance Companion: On-board diagnostic unit performing wireless checks and logging maintenance with digital signatures. Modular Add-Ons: Reconnaissance, utility, or medical modules attach via certified interfaces; each installation automatically registered in the ledger. 9. Compliance Alignment (Non-Exhaustive) CollectiveOS supports integration with established global standards, including: Functional Safety (ISO 26262) Safety of Intended Functionality (ISO 21448) Cybersecurity Management (UNECE R155/R156) Automated Driving System Guidelines (NHTSA ADS 2.0) All personal or proprietary data remains local; only hashed, non-identifiable artifacts are externally replicated. 10. Program Roadmap (Milestones) Q1 2026 → Internal prototype demonstration. Q2 2026 → Concept paper submission for government pilot track. Q3 2026 → Collaboration agreements &amp; auditor toolchain finalized. Q4 2026 → Public release of regulator brief &amp; pilot data summary. 11. 
Conclusion Transparent autonomy transforms AI-driven mobility from a black box into a verifiable civic system. By embedding governance directly into the control stack, Immortal Tek’s CollectiveOS™ establishes a durable framework for safety, accountability, and public trust in the age of intelligent transport. 12. Contact Mark Anthony Brewer, Founder — Immortal Tek / CollectiveOS™. ✉ thecollectiveai@proton.me 13. Notice This public edition omits proprietary implementation details, cryptographic specifications, and partner agreements. All information herein is provided for policy, regulatory, and investor evaluation under standard confidentiality expectations. Public Summary v1.01 | For informational purposes only | © 2025 Immortal Tek LLC. All rights reserved.</description>
      <pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7104577758</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>transparency_openness</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Trilateral Security Nexus Framework (TSNF): India&apos;s Transformative Treaty Model Integrating Law, Technology, and Development for Global Cooperative Security</title>
      <link>https://doi.org/10.5281/zenodo.17612903</link>
      <description> FINAL ZENODO DESCRIPTION — GLOBAL PREMIUM EDITION (2025) DOI: https://doi.org/10.5281/zenodo.17612903. Title: Trilateral Security Nexus Framework (TSNF) — Version 2: International Master Edition. Author: Mazumdar, B. (2025). Category: Data Set / Treaty Corpus / Computational Governance Architecture. License: CC BY 4.0. Version: 2.0 — Final International Master Edition. Release Status: Final, Peer-Standard, Audit-Ready. Abstract (Global Executive Summary) The Trilateral Security Nexus Framework (TSNF) — Version 2 is an internationally standardized, treaty-grade, computational security architecture designed to unify transnational security systems, AI-ethics enforcement, and sovereign digital governance under a single interoperable global model. This International Master Edition integrates: A complete, enforceable treaty corpus Cross-border governance standards AI risk-regulation modules Post-quantum security infrastructure Human-rights-anchored oversight Blockchain-verified transparency mechanisms Global South–aligned decolonial safeguards Digital-twin simulation environments Computational models (Python + LaTeX) TSNF-V2 enables nations, institutions, and researchers to build transparent, sovereign, resilient, ethically compliant, and interoperable governance systems—engineered for the geopolitical and technological challenges of 2025–2075. Keywords Trilateral Security; Global Governance; Cooperative Security Architecture; Border Integration; Digital Sovereignty; Ethical AI Frameworks; Post-Quantum Cryptography; Sovereign Identity; Blockchain Transparency; Computational Public Policy; Human-Rights Audit Systems; Global South Governance; Open Standards; FAIR + D Data; Digital Twins; Risk Anticipation Models. 1. 
Purpose &amp; Rationale TSNF-V2 addresses three foundational global governance needs: 1.1 Transnational &amp; Hybrid Security Threats Cyber-physical intrusion, AI-driven disinformation, autonomous weaponization, asymmetric conflict, climate-induced displacement. 1.2 Digital Sovereignty, Trust &amp; Accountability Verifiable public data, algorithmic transparency, explainable AI, anti-colonial and sovereignty-preserving digital infrastructures. 1.3 Global Interoperability Across Systems &amp; Cultures A universal yet sovereignty-respecting standard for nations with diverse legal, technological, and socio-economic architectures. Optimized for resource-constrained, developing, and postcolonial states, while remaining fully compatible with Global North systems. 2. Core TSNF Architecture 2.1 Trilateral Security Model Three interconnected pillars: Operational &amp; Border Security (OpSec) Socio-Economic Stability &amp; Risk Analytics Ethical AI &amp; Digital Governance Oversight 2.2 Post-Quantum Cryptographic Integrity Layer Dilithium, Falcon, SPHINCS+ Zero-trust sovereign identity Distributed blockchain lineage Policy-grade 50-year cryptographic durability 2.3 AI-Driven Policy Intelligence Engine Multi-modal inference and behavioural modelling Early-warning socio-political stress mapping Ethical constraints aligned with UNESCO–OECD–NIST Fully explainable, audit-ready inference pipelines 2.4 TSNF Governance Compass Integrates constitutional law, human-rights protection, climate resilience, civilian security, and evidence-driven decision protocols. 3. Key Innovations in Version 2 Trilateral Cooperative Security Protocol (TCSP-2) Systemic Risk Intelligence Model (SRIM-2) — enhanced Distributed Multilateral Trust Fabric (D-MTF) Quantum Threat Anticipation Maps (QTAM-2) Blockchain Transparency Ledger (BTL) Global South Sovereignty Module (GSSM) Hyper-fidelity Digital Twin Simulations Treaty-grade legality with computational enforceability 4. 
Dataset Composition Includes: Full Treaty Corpus (Chapters 1–11) 22 High-Resolution Annexes (A–U) System-architecture diagrams Security-protocol matrices GIS-linked governance visualizations Python + LaTeX algorithmic pipelines Sovereign identity models Verification and compliance tables Technical glossaries and schema references Suitable for: National security institutions Policy research and analysis Legal and governance studies AI safety assessments International cooperation Reproducible computational modelling 5. Methodology Multi-domain systems modelling Behavioural risk analytics PQC-aligned cryptographic engineering Digital-twin simulations Blockchain audit trails Comparative treaty analysis FAIR + D (Decolonial) data standards Global South–centric governance benchmarking 6. Technical Specifications PQC: Dilithium, Falcon, SPHINCS+ Blockchain: Distributed transparency ledger AI: Interpretable risk-inference engine with ethics filters Data Standards: JSON-LD, GeoJSON, W3C DIDs, FAIR schemas Security Model: Zero-trust + sovereign identity 7. Primary Audiences National Security Councils International policy bodies Intelligence &amp; cybersecurity agencies Digital governance authorities Universities &amp; research laboratories Multilateral organizations (UN, AU, EU, ASEAN, BIMSTEC, SCO) AI-ethics certification institutions 8. Global Significance TSNF-V2 equips states to: Build interoperable global security systems Strengthen digital sovereignty Reduce systemic and cross-border risks Prevent AI misuse and geopolitical escalation Establish transparent governance infrastructures Enhance national and regional resilience Recognized as one of the most advanced open-access public-policy security frameworks of its generation. 9. Recommended Citation APA 7th Edition: Mazumdar, B. (2025). Trilateral Security Nexus Framework (TSNF) — Version 2: International Master Edition [Data set]. Zenodo. https://doi.org/10.5281/zenodo.17612903 10. 
Cryptographic Integrity Validation (Generated via OpenSSL 3.0) MD5: b3aeaa616f6776e6361dad5cb4500ab9 SHA-1: 8d59bc26e25b4f23bd50f31f1d58defc6b5d5dfe SHA-256: 88daa2b87b6afa90dd7196af6812af420b0c4047996f48e97c23c27b61c33c7 SHA-384: 4df4657e52c59b001281ff39d09854d3d015aac494421c603607cec5c2f7b73eab06bda5c78f136af4e0d7ae84e6ba92 SHA-512: 299a648a71e09bc6b5c3bfcaed67efff448a0fe9c3be46043e6d370307c3132042555e39931e02279d7b468e6341d3be3944d322aeb64b3cbe0de2507513b9e1 CRC32: af3749bd 11. Ethical–Legal Compliance Fully aligned with UNESCO (2021) AI Ethics, OECD AI Principles, IHL, GDPR Contains no classified, sensitive, or personal data 100% open-access, reproducible, and audit-ready Developed using FAIR + D (Decolonial) data principles Produced independently with no institutional influence 12. Funding No external or institutional funding. Developed entirely through independent research. 13. Software Availability All computational models (Python + LaTeX) are provided in Annex H (II): Computational Policy Models. Fully reproducible and version-controlled.</description>
      <pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7105782702</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>multi_domain</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>SkyHauler™: The ARPC-Powered Heavy-Lift Urban Cargo Aircraft</title>
      <link>https://doi.org/10.5281/zenodo.17769053</link>
      <description>SkyHauler™: The ARPC-Powered Heavy-Lift Urban Cargo Aircraft White Paper | Public Release v1.0 Certified: CollectiveOS | Governance: GATA PRIME-Aligned | License: COHL-1.0 + CERN-OHL v2 Date: November 2025 1. Executive Summary The global logistical infrastructure stands at a precipice. The convergence of rapid urbanization, just-in-time supply chain fragility, and the urgent necessity for decarbonization has exposed a critical gap in our transport capabilities. We possess efficient long-haul air freight and ubiquitous last-mile van delivery, yet the &quot;middle mile&quot;—the rapid, point-to-point transport of heavy, critical payloads (50kg to 150kg) across complex urban and alpine environments—remains unsolved. Traditional solutions are failing: helicopters are economically and acoustically prohibitive, while conventional electric drones are shackled by the thermodynamic limitations of lithium-ion chemistry, rendering them incapable of meaningful heavy-lift operations beyond trivial ranges. This White Paper introduces SkyHauler™, a heavy-lift Unmanned Aerial System (UAS) that fundamentally resolves this paralysis. SkyHauler is not merely an iterative improvement in drone technology; it is the first kinetic application of the Janus-Era Scientific Framework. By fusing the thermodynamic abundance of the Adaptive Resonance Power Cell (ARPC) with the cryptographic safety of the Patent-Free Science (PFS) governance layer, SkyHauler achieves a performance envelope previously deemed physically impossible for electric vertical takeoff and landing (eVTOL) aircraft. The SkyHauler platform is engineered to deliver a rated payload of 100 kilograms over operational radii exceeding 120 kilometers, operating reliably in thermal extremes from -35°C to +78°C. 
This capability is not theoretical; it is the direct result of integrating the ARPC Primary System, which delivers an energy density of 600–900 Wh/kg—a 3.1x multiplier over the industry-standard lithium-polymer baseline.1 By decoupling power density from energy density via a structural Supercapacitor Lattice, and recovering waste heat through Quantum Metal thermal engines, SkyHauler transcends the &quot;range anxiety&quot; that has historically grounded electric heavy-lift ambitions. However, physical capability alone is insufficient in an era of heightened geopolitical tension and safety consciousness. The deployment of heavy-lift autonomous systems raises legitimate concerns regarding dual-use proliferation, public safety, and algorithmic accountability. SkyHauler addresses these through a radical Governance-as-Code architecture. It is the first commercially locked, Zero-Trust logistics platform governed by CollectiveOS. Flight authorization, payload verification, and airspace compliance are not discretionary choices made by a pilot; they are cryptographic constraints enforced by the GATA PRIME hardware security module. Every component, from the carbon fiber weave of the airframe to the Living Fibonacci Engine (LFE) flight control laws, is verified via the Foundational Recognition Protocol (FRP), creating an immutable lineage of accountability stored in the WORM Proof Vault.1 This document provides a comprehensive technical, operational, and economic analysis of the SkyHauler system. It details the Coaxial X8-H airframe dynamics, the quantum-thermodynamic cycles of the ARPC energy core, and the Constraint-Native avionics that allow the aircraft to &quot;surf&quot; atmospheric turbulence rather than fight it. 
Furthermore, it outlines the Human Global Science Collective (HGSC) diplomatic framework that allows this powerful technology to be distributed as a global public good—protected from patent enclosure and weaponization—ensuring that the future of logistics is built on abundance, transparency, and verified trust. 2. Introduction: The Kinetic and Governance Gap in Heavy-Lift Logistics The trajectory of urban air mobility (UAM) has been defined by a persistent chasm between promise and physics. For a decade, the industry has heralded the arrival of &quot;flying delivery trucks,&quot; yet operational reality has been limited to lightweight deliveries of coffee and defibrillators. The reason for this stagnation is structural: the incumbent technological stack forces engineers to choose between range, payload, and safety, while the incumbent legal stack forces them to choose between openness and profitability. The Janus Era—characterized by constraint-native computation and post-scarcity thermodynamics—demands that we reject these trade-offs.1 2.1 The Thermodynamic Ceiling: Why Batteries Fail The fundamental bottleneck of electric aviation is the Specific Energy of the storage medium. Conventional Nickel-Manganese-Cobalt (NMC) and Lithium-Iron-Phosphate (LFP) chemistries have plateaued at an energy density of approximately 250–300 Wh/kg.1 In the context of a 250kg Maximum Take-Off Weight (MTOW) aircraft, this density imposes a brutal penalty: to achieve a flight time of 30 minutes, the battery mass must exceed the payload mass. This results in a &quot;parasitic loop&quot; where the aircraft expends the majority of its energy simply lifting its own fuel source. Furthermore, liquid-electrolyte batteries are thermally fragile. In the Swiss Alps, where the strategic &quot;Winter Energy Gap&quot; demands reliable logistics during freezing conditions, standard Li-ion cells suffer catastrophic voltage sag. 
At -20°C, the internal resistance of an LFP cell spikes, reducing accessible capacity by over 50% and increasing the risk of lithium plating and dendrite formation.1 This thermal limitation effectively grounds electric logistics fleets during the seasons when they are most needed, rendering them useless for critical humanitarian or alpine supply missions. 2.2 The Governance Failure: The Risk of &quot;Black Box&quot; Autonomy Parallel to the hardware limitations is a crisis of trust. The traditional aerospace model relies on &quot;Security through Obscurity&quot;—proprietary flight controllers, closed-source Battery Management Systems (BMS), and patented operational logic. In a world of increasing cyber-physical threats, this opacity is a liability. Operators cannot verify if a drone’s obstacle avoidance code has a bias, regulators cannot audit the decision-making logic of an autonomous agent, and the public has no assurance that a 250kg flying object is safe beyond the manufacturer’s self-certification.1 Moreover, the patent system itself acts as a friction brake on innovation. By enclosing critical safety features—such as redundant motor mixing algorithms or battery thermal runaway protection—behind IP paywalls, the industry fragments into non-interoperable silos. This prevents the emergence of a standardized, globally verified safety architecture, leaving the skies vulnerable to &quot;lowest bidder&quot; technologies that prioritize cost over constraint-aligned safety.1 2.3 The SkyHauler Paradigm: Constraint-Native &amp; Patent-Free SkyHauler resolves these dual failures by integrating two radical paradigm shifts derived from the Janus Scientific Framework: Thermodynamic Shift: SkyHauler abandons legacy batteries for the ARPC Primary System. 
By utilizing self-healing silicon anodes and quantum-thermal regeneration, it breaks the 350 Wh/kg regulatory and physical barrier, achieving densities of 600–900 Wh/kg.1 This allows the aircraft to carry 100kg payloads over 100km distances—a metric that fundamentally alters the economics of logistics. Governance Shift: SkyHauler is the flagship platform for Patent-Free Science v2.0. It is governed by CollectiveOS, a decentralized operating system that enforces safety constraints at the kernel level. Its design is defensively published in the Collective Public Registry (CPR), preventing patent trolling, while its operation is restricted by GATA PRIME to verified, peaceful, and public-safe missions.1 This White Paper serves as the definitive manual for this new era of logistics. It is not just a spec sheet; it is a blueprint for a world where heavy-lift capability is a ubiquitous, safe, and open utility. 3. Platform Architecture: The SkyHauler X8-H Configuration The SkyHauler airframe is designed according to the principles of &quot;Functional Brutalism.&quot; In heavy-lift logistics, aesthetic considerations are secondary to torque authority, structural rigidity, and redundant reliability. The vehicle must survive the chaotic wind shear of urban canyons and the icing conditions of alpine passes while maintaining a compact footprint for vertiport integration. 3.1 Coaxial X8-H Octocopter Dynamics SkyHauler utilizes a Coaxial X8 configuration—four arms, with eight motors mounted in an over-under (contra-rotating) setup. This topology was selected over the standard flat-octocopter or hexacopter designs for three critical reasons governed by urban constraints: Footprint Efficiency: A standard flat-octocopter capable of lifting 250kg would require a diameter exceeding 3.5 meters, rendering it incompatible with standard urban landing pads (often 5m x 5m) or hospital rooftops. 
The Coaxial X8 configuration concentrates the thrust column, delivering the lift of an octocopter within the footprint of a large quadcopter (approx. 1.9m wheelbase).3 This allows SkyHauler to operate in constrained &quot;pop-up&quot; landing zones in disaster areas or dense city centers. Redundancy and Yaw Authority: The primary failure mode for multirotors is propulsion loss. In a flat-hexacopter, the loss of a single motor reduces yaw authority significantly, often necessitating an immediate, uncontrolled descent. In the SkyHauler X8, if a top motor fails, the bottom motor on the same coaxial axis can instantaneously increase RPM to compensate for the lost thrust and yaw torque. The Janus Flight Controller (discussed in Section 5) detects the torque imbalance in microseconds and adjusts the remaining seven motors to maintain stable hover and controlled descent capabilities.</description>
      <pubDate>Sun, 30 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7108068074</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>SkyHauler™: The ARPC-Powered Heavy-Lift Urban Cargo Aircraft</title>
      <link>https://doi.org/10.5281/zenodo.17769052</link>
      <description>SkyHauler™: The ARPC-Powered Heavy-Lift Urban Cargo Aircraft White Paper | Public Release v1.0 Certified: CollectiveOS | Governance: GATA PRIME-Aligned | License: COHL-1.0 + CERN-OHL v2 Date: November 2025 1. Executive Summary The global logistical infrastructure stands at a precipice. The convergence of rapid urbanization, just-in-time supply chain fragility, and the urgent necessity for decarbonization has exposed a critical gap in our transport capabilities. We possess efficient long-haul air freight and ubiquitous last-mile van delivery, yet the &quot;middle mile&quot;—the rapid, point-to-point transport of heavy, critical payloads (50kg to 150kg) across complex urban and alpine environments—remains unsolved. Traditional solutions are failing: helicopters are economically and acoustically prohibitive, while conventional electric drones are shackled by the thermodynamic limitations of lithium-ion chemistry, rendering them incapable of meaningful heavy-lift operations beyond trivial ranges. This White Paper introduces SkyHauler™, a heavy-lift Unmanned Aerial System (UAS) that fundamentally resolves this paralysis. SkyHauler is not merely an iterative improvement in drone technology; it is the first kinetic application of the Janus-Era Scientific Framework. By fusing the thermodynamic abundance of the Adaptive Resonance Power Cell (ARPC) with the cryptographic safety of the Patent-Free Science (PFS) governance layer, SkyHauler achieves a performance envelope previously deemed physically impossible for electric vertical takeoff and landing (eVTOL) aircraft. The SkyHauler platform is engineered to deliver a rated payload of 100 kilograms over operational radii exceeding 120 kilometers, operating reliably in thermal extremes from -35°C to +78°C. 
This capability is not theoretical; it is the direct result of integrating the ARPC Primary System, which delivers an energy density of 600–900 Wh/kg—a 3.1x multiplier over the industry-standard lithium-polymer baseline.1 By decoupling power density from energy density via a structural Supercapacitor Lattice, and recovering waste heat through Quantum Metal thermal engines, SkyHauler transcends the &quot;range anxiety&quot; that has historically grounded electric heavy-lift ambitions. However, physical capability alone is insufficient in an era of heightened geopolitical tension and safety consciousness. The deployment of heavy-lift autonomous systems raises legitimate concerns regarding dual-use proliferation, public safety, and algorithmic accountability. SkyHauler addresses these through a radical Governance-as-Code architecture. It is the first commercially locked, Zero-Trust logistics platform governed by CollectiveOS. Flight authorization, payload verification, and airspace compliance are not discretionary choices made by a pilot; they are cryptographic constraints enforced by the GATA PRIME hardware security module. Every component, from the carbon fiber weave of the airframe to the Living Fibonacci Engine (LFE) flight control laws, is verified via the Foundational Recognition Protocol (FRP), creating an immutable lineage of accountability stored in the WORM Proof Vault.1 This document provides a comprehensive technical, operational, and economic analysis of the SkyHauler system. It details the Coaxial X8-H airframe dynamics, the quantum-thermodynamic cycles of the ARPC energy core, and the Constraint-Native avionics that allow the aircraft to &quot;surf&quot; atmospheric turbulence rather than fight it. 
Furthermore, it outlines the Human Global Science Collective (HGSC) diplomatic framework that allows this powerful technology to be distributed as a global public good—protected from patent enclosure and weaponization—ensuring that the future of logistics is built on abundance, transparency, and verified trust. 2. Introduction: The Kinetic and Governance Gap in Heavy-Lift Logistics The trajectory of urban air mobility (UAM) has been defined by a persistent chasm between promise and physics. For a decade, the industry has heralded the arrival of &quot;flying delivery trucks,&quot; yet operational reality has been limited to lightweight deliveries of coffee and defibrillators. The reason for this stagnation is structural: the incumbent technological stack forces engineers to choose between range, payload, and safety, while the incumbent legal stack forces them to choose between openness and profitability. The Janus Era—characterized by constraint-native computation and post-scarcity thermodynamics—demands that we reject these trade-offs.1 2.1 The Thermodynamic Ceiling: Why Batteries Fail The fundamental bottleneck of electric aviation is the Specific Energy of the storage medium. Conventional Nickel-Manganese-Cobalt (NMC) and Lithium-Iron-Phosphate (LFP) chemistries have plateaued at an energy density of approximately 250–300 Wh/kg.1 In the context of a 250kg Maximum Take-Off Weight (MTOW) aircraft, this density imposes a brutal penalty: to achieve a flight time of 30 minutes, the battery mass must exceed the payload mass. This results in a &quot;parasitic loop&quot; where the aircraft expends the majority of its energy simply lifting its own fuel source. Furthermore, liquid-electrolyte batteries are thermally fragile. In the Swiss Alps, where the strategic &quot;Winter Energy Gap&quot; demands reliable logistics during freezing conditions, standard Li-ion cells suffer catastrophic voltage sag. 
At -20°C, the internal resistance of an LFP cell spikes, reducing accessible capacity by over 50% and increasing the risk of lithium plating and dendrite formation [1]. This thermal limitation effectively grounds electric logistics fleets during the seasons when they are most needed, rendering them useless for critical humanitarian or alpine supply missions.

2.2 The Governance Failure: The Risk of &quot;Black Box&quot; Autonomy

Parallel to the hardware limitations is a crisis of trust. The traditional aerospace model relies on &quot;security through obscurity&quot;: proprietary flight controllers, closed-source Battery Management Systems (BMS), and patented operational logic. In a world of increasing cyber-physical threats, this opacity is a liability. Operators cannot verify whether a drone’s obstacle-avoidance code has a bias, regulators cannot audit the decision-making logic of an autonomous agent, and the public has no assurance that a 250kg flying object is safe beyond the manufacturer’s self-certification [1]. Moreover, the patent system itself acts as a friction brake on innovation. By enclosing critical safety features, such as redundant motor-mixing algorithms or battery thermal-runaway protection, behind IP paywalls, the industry fragments into non-interoperable silos. This prevents the emergence of a standardized, globally verified safety architecture, leaving the skies vulnerable to &quot;lowest bidder&quot; technologies that prioritize cost over constraint-aligned safety [1].

2.3 The SkyHauler Paradigm: Constraint-Native &amp; Patent-Free

SkyHauler resolves these dual failures by integrating two radical paradigm shifts derived from the Janus Scientific Framework:
- Thermodynamic Shift: SkyHauler abandons legacy batteries for the ARPC Primary System. By utilizing self-healing silicon anodes and quantum-thermal regeneration, it breaks the 350 Wh/kg regulatory and physical barrier, achieving densities of 600–900 Wh/kg [1]. This allows the aircraft to carry 100kg payloads over 100km distances, a metric that fundamentally alters the economics of logistics.
- Governance Shift: SkyHauler is the flagship platform for Patent-Free Science v2.0. It is governed by CollectiveOS, a decentralized operating system that enforces safety constraints at the kernel level. Its design is defensively published in the Collective Public Registry (CPR), preventing patent trolling, while its operation is restricted by GATA PRIME to verified, peaceful, and public-safe missions [1].

This White Paper serves as the definitive manual for this new era of logistics. It is not just a spec sheet; it is a blueprint for a world where heavy-lift capability is a ubiquitous, safe, and open utility.

3. Platform Architecture: The SkyHauler X8-H Configuration

The SkyHauler airframe is designed according to the principles of &quot;Functional Brutalism.&quot; In heavy-lift logistics, aesthetic considerations are secondary to torque authority, structural rigidity, and redundant reliability. The vehicle must survive the chaotic wind shear of urban canyons and the icing conditions of alpine passes while maintaining a compact footprint for vertiport integration.

3.1 Coaxial X8-H Octocopter Dynamics

SkyHauler utilizes a Coaxial X8 configuration: four arms, with eight motors mounted in an over-under (contra-rotating) setup. This topology was selected over the standard flat-octocopter or hexacopter designs for three critical reasons governed by urban constraints:
- Footprint Efficiency: A standard flat octocopter capable of lifting 250kg would require a diameter exceeding 3.5 meters, rendering it incompatible with standard urban landing pads (often 5m x 5m) or hospital rooftops. The Coaxial X8 configuration concentrates the thrust column, delivering the lift of an octocopter within the footprint of a large quadcopter (approx. 1.9m wheelbase) [3]. This allows SkyHauler to operate in constrained &quot;pop-up&quot; landing zones in disaster areas or dense city centers.
- Redundancy and Yaw Authority: The primary failure mode for multirotors is propulsion loss. In a flat hexacopter, the loss of a single motor reduces yaw authority significantly, often necessitating an immediate, uncontrolled descent. In the SkyHauler X8, if a top motor fails, the bottom motor on the same coaxial axis can instantaneously increase RPM to compensate for the lost thrust and yaw torque. The Janus Flight Controller (discussed in Section 5) detects the torque imbalance in microseconds and adjusts the remaining seven motors to maintain stable hover and controlled descent capabilities.</description>
      <pubDate>Sun, 30 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7108092689</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>regulation_compliance</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Bridging Data Protection and AI Ethics: A Two-Study Empirical Examination of LGPD Principles and Ethical AI in Brazil</title>
      <link>https://doi.org/10.5281/zenodo.17847734</link>
<description>Context: Data protection laws and AI ethics frameworks are increasingly invoked to govern the risks of data-driven and algorithmic systems, but there is still limited empirical evidence on how practitioners and students perceive the relationship between legal principles (such as the Brazilian LGPD) and ethical principles for Artificial Intelligence (AI).

Goal: This study investigates how LGPD principles are interpreted as a foundation for ethical AI, examining perceived alignments, practical challenges, regulatory gaps, and expectations for the evolution of AI governance in Brazil.

Method: We conducted two complementary survey-based studies. Study 1 collected responses from 30 computing students, exploring their perceptions of privacy, transparency, security, data minimization, and accountability in AI systems. Study 2 extended this investigation with 100 participants (students and professionals in diverse software project roles), using paired LGPD–AI items, Likert-scale questions on sufficiency and complementarity, and open-ended questions analyzed through inductive content analysis.

Results: Across both studies, participants consistently perceived strong conceptual alignment between LGPD principles and AI ethical principles, especially regarding privacy, transparency, security, prevention, non-discrimination, and accountability. However, they also reported important gaps, particularly in explainability, fairness and bias mitigation, inclusion and diversity, solidarity, and other human-centered values, as well as uncertainty about accountability for automated decisions and the fit of static legal principles to dynamic AI systems.

Conclusion: The findings indicate that LGPD is viewed as a solid but insufficient foundation for ethical AI. Participants expect LGPD to be complemented or updated by AI-specific ethical and regulatory frameworks, governance mechanisms, and technical measures such as explainable AI and algorithmic auditing, pointing to the need for an integrated ecosystem for responsible AI in Brazil.</description>
      <pubDate>Sun, 07 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7110063392</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>ethics_regulation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Bridging Data Protection and AI Ethics: A Two-Study Empirical Examination of LGPD Principles and Ethical AI in Brazil</title>
      <link>https://doi.org/10.5281/zenodo.17847735</link>
<description>Context: Data protection laws and AI ethics frameworks are increasingly invoked to govern the risks of data-driven and algorithmic systems, but there is still limited empirical evidence on how practitioners and students perceive the relationship between legal principles (such as the Brazilian LGPD) and ethical principles for Artificial Intelligence (AI).

Goal: This study investigates how LGPD principles are interpreted as a foundation for ethical AI, examining perceived alignments, practical challenges, regulatory gaps, and expectations for the evolution of AI governance in Brazil.

Method: We conducted two complementary survey-based studies. Study 1 collected responses from 30 computing students, exploring their perceptions of privacy, transparency, security, data minimization, and accountability in AI systems. Study 2 extended this investigation with 100 participants (students and professionals in diverse software project roles), using paired LGPD–AI items, Likert-scale questions on sufficiency and complementarity, and open-ended questions analyzed through inductive content analysis.

Results: Across both studies, participants consistently perceived strong conceptual alignment between LGPD principles and AI ethical principles, especially regarding privacy, transparency, security, prevention, non-discrimination, and accountability. However, they also reported important gaps, particularly in explainability, fairness and bias mitigation, inclusion and diversity, solidarity, and other human-centered values, as well as uncertainty about accountability for automated decisions and the fit of static legal principles to dynamic AI systems.

Conclusion: The findings indicate that LGPD is viewed as a solid but insufficient foundation for ethical AI. Participants expect LGPD to be complemented or updated by AI-specific ethical and regulatory frameworks, governance mechanisms, and technical measures such as explainable AI and algorithmic auditing, pointing to the need for an integrated ecosystem for responsible AI in Brazil.</description>
      <pubDate>Sun, 07 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7110216578</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>ethics_regulation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Supplementary file 1_Educators’ reflections on AI-automated feedback in higher education: a structured integrative review of potentials, pitfalls, and ethical dimensions.docx</title>
      <link>https://doi.org/10.3389/feduc.2025.1704820.s001</link>
      <description>The rapid incorporation of artificial intelligence (AI) into higher education has established automated feedback systems as both a potential benefit and a challenge. Accordingly, this systematic study synthesizes the findings of 37 empirical investigations (2014–2024) to underscore the significance of teachers’ perspectives, which are sometimes overlooked in the use of AI-mediated feedback. Research indicates that AI can enhance customization, deliver immediate feedback, optimize repetitive processes, and increase student engagement. Nonetheless, these advantages are persistently compromised by concerns regarding algorithmic bias, data privacy, the deterioration of teacher-student relationships, and inadequate professional growth. The current evidence base is methodologically deficient, predominantly including short-term research or subjective evaluations, with just a limited number providing longitudinal data or controlled comparisons. This research distinguishes itself from previous evaluations that emphasize technology attributes or student results by integrating the FATE framework (Fairness, Accountability, Transparency, Ethics) with adoption models (TAM/UTAUT). It redefines educators as proactive mediators whose ethical choices and professional identities influence the optimal integration of AI. Thus, it contends that AI feedback should enhance, rather than replace, human teaching, and that its ongoing application depends on professional growth and strong governance frameworks. Future research must focus on longitudinal, cross-cultural, and outcome-validated approaches to shift the profession from experimental excitement to evidence-based educational change.</description>
      <pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7111054345</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>other</category>
      <category>ethics_regulation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Supplementary file 2_Educators’ reflections on AI-automated feedback in higher education: a structured integrative review of potentials, pitfalls, and ethical dimensions.docx</title>
      <link>https://doi.org/10.3389/feduc.2025.1704820.s002</link>
      <description>The rapid incorporation of artificial intelligence (AI) into higher education has established automated feedback systems as both a potential benefit and a challenge. Accordingly, this systematic study synthesizes the findings of 37 empirical investigations (2014–2024) to underscore the significance of teachers’ perspectives, which are sometimes overlooked in the use of AI-mediated feedback. Research indicates that AI can enhance customization, deliver immediate feedback, optimize repetitive processes, and increase student engagement. Nonetheless, these advantages are persistently compromised by concerns regarding algorithmic bias, data privacy, the deterioration of teacher-student relationships, and inadequate professional growth. The current evidence base is methodologically deficient, predominantly including short-term research or subjective evaluations, with just a limited number providing longitudinal data or controlled comparisons. This research distinguishes itself from previous evaluations that emphasize technology attributes or student results by integrating the FATE framework (Fairness, Accountability, Transparency, Ethics) with adoption models (TAM/UTAUT). It redefines educators as proactive mediators whose ethical choices and professional identities influence the optimal integration of AI. Thus, it contends that AI feedback should enhance, rather than replace, human teaching, and that its ongoing application depends on professional growth and strong governance frameworks. Future research must focus on longitudinal, cross-cultural, and outcome-validated approaches to shift the profession from experimental excitement to evidence-based educational change.</description>
      <pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7111325442</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>other</category>
      <category>ethics_regulation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Supplementary file 1_Explainable machine learning to predict the cost of capital.pdf</title>
      <link>https://doi.org/10.3389/frai.2025.1578190.s001</link>
<description>This study investigates the impact of financial and non-financial factors on a firm&apos;s ex-ante cost of capital, which reflects investors&apos; perception of a firm&apos;s riskiness. Departing from previous literature, we apply the XGBoost algorithm and two explainable Artificial Intelligence methods, namely the Shapley value approach and Lorenz Model Selection, to a sample of more than 1,400 listed companies worldwide. Results confirm the relevance of key financial indicators such as firm size, ROE, and firm portfolio risk, but also identify a firm&apos;s non-financial features and a country&apos;s institutional quality as relevant predictors of the cost of capital. These results suggest that non-financial indicators and country-level institutional quality matter for the firm&apos;s ex-ante cost of equity, which expresses investors&apos; risk perception. Our findings pave the way for future investigations on the impact of ESG and country factors in predicting the cost of capital.</description>
      <pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7111388998</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Data Sheet 1_Testing the applicability of a governance checklist for high-risk AI-based learning outcome assessment in Italian universities under the EU AI act annex III.pdf</title>
      <link>https://doi.org/10.3389/frai.2025.1718613.s001</link>
<description>Background: The EU AI Act classifies AI-based learning outcome assessment as high-risk (Annex III, point 3b), yet sector-specific frameworks for institutional self-assessment remain underdeveloped. This creates accountability gaps affecting student rights and educational equity, as institutions lack systematic tools to demonstrate that algorithmic assessment systems produce valid and fair outcomes.

Methods: This exploratory study tests whether ALTAI’s trustworthy AI requirements can be operationalized for educational assessment governance through the XAI-ED Consequential Assessment Framework, which integrates three educational evaluation theories (Messick’s consequential validity, Kirkpatrick’s four-level model, Stufflebeam’s CIPP). Following pilot testing with three institutions, four independent coders applied a 27-item checklist to policy documents from 14 Italian universities (13% with formal AI policies plus one baseline case) using four-point ordinal scoring and structured consensus procedures.

Results: Intercoder reliability analysis revealed substantial agreement (Fleiss’s κ = 0.626, Krippendorff’s α = 0.838), with higher alpha reflecting predominantly adjacent-level disagreements suitable for exploratory validation. Analysis of 14 universities reveals substantial governance heterogeneity among early adopters (Institutional Index: 0.00–60.32), with Technical Robustness and Safety showing lowest implementation (M = 19.64, SD = 21.08) and Societal Well-being highest coverage (M = 52.38, SD = 29.38). Documentation prioritizes aspirational statements over operational mechanisms, with only 13% of Italian institutions having adopted AI policies by September 2025.

Discussion: The framework demonstrates feasibility for self-assessment but reveals critical misalignment: universities document aspirational commitments more readily than technical safeguards, with particularly weak capacity for validity testing and fairness monitoring. Findings suggest three interventions: (1) ministerial operational guidance translating EU AI Act requirements into educational contexts, (2) inter-institutional capacity-building addressing technical-pedagogical gaps, and (3) integration of AI governance indicators into national quality assurance systems to enable systematic accountability. The study contributes to understanding how educational evaluation theory can inform the translation of abstract trustworthy AI principles into outcome-focused institutional practices under high-risk classifications.</description>
      <pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7114986346</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>digital_governance</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Quantum Veil Protocol (QVP): A Five-Pillar Post-Quantum Defensive Architecture for AI Governance, Cybersecurity, and Mission-Critical Digital Statecraft</title>
      <link>https://doi.org/10.5281/zenodo.17302170</link>
<description>Quantum Veil Protocol (QVP), Final Ultimate Complete Edition (v1.0++). DOI: https://doi.org/10.5281/zenodo.17302170

The Quantum Veil Protocol (QVP) is an international, sovereign-grade defensive digital architecture engineered to secure national and mission-critical systems in the post-quantum era. This work presents a rigorously governed, ethically constrained, and legally aligned framework that integrates post-quantum cryptography (PQC), AI-assisted governance, and zero-downtime operational continuity into a unified security doctrine. QVP is not a conventional cybersecurity paper. It is a state-level architectural blueprint designed for long-term digital sovereignty, resilience against quantum-era threats, and strict compliance with evolving global governance and legal norms. The protocol is strictly defensive in nature, explicitly prohibiting offensive cyber operations, autonomous escalation, or unaccountable artificial intelligence behavior.

Five-Pillar Sovereign Architecture: QVP is structured around a formally defined five-pillar architecture, each component independently auditable yet systemically integrated:
- Veil Layer: an entropy-driven, dynamically mutating network topology designed to defeat reconnaissance, lateral movement, traffic inference, and structural mapping attacks.
- Echo Shield: a behavioral, AI-assisted anomaly detection and reflexive defense layer operating exclusively within legal, ethical, and governance constraints.
- Nexus Core: a federated intelligence synthesis engine enabling cross-sector situational awareness through zero-knowledge information exchange while preserving sovereignty and privacy.
- Shadow-Plane: a quantum-secure, live-mirror failover fabric ensuring uninterrupted mission continuity during catastrophic cyber, systemic, or infrastructure-level events.
- Ethics Anchor: a formal governance and oversight framework enforcing legal accountability, cultural constraints, and mandatory human–AI dual authorization across all automated and assisted decisions.

Technical and Governance Scope: this Final Ultimate Complete Edition includes formal mathematical models and system-state transition frameworks; STRIDE-based and MITRE ATT&amp;CK–aligned threat modeling; post-quantum algorithm selection and justification (Kyber, Dilithium, SPHINCS+); digital-twin simulations and multi-dimensional stress-testing results; compliance mapping to NIST SP 800-53, ISO/IEC 27001, the EU AI Act, and CCDCOE norms; national-scale integration models spanning the defense, energy, intelligence, and critical infrastructure sectors; and cross-cultural governance harmonization enabling globally deployable yet fully sovereign implementations.

Edition Statement: the Final Ultimate Complete Edition (v1.0++) represents a stable, complete, and archival-grade release. It is intended as a long-term reference framework rather than an experimental draft. All architectural claims, governance constraints, and ethical safeguards are explicitly defined, bounded, and auditable. The protocol enforces a defensive-only operational posture, mandatory human–AI dual authorization, continuous auditability and legal accountability, and a prohibition of autonomous or offensive cyber actions.

Intended Audience: researchers in post-quantum security and AI governance; policymakers and regulatory authorities; national defense and critical-infrastructure planners; and strategic think tanks and digital sovereignty institutions.

Author: Dr. B. Mazumdar, D.Sc. (Hon.), D.Litt. (Hon.), Independent Researcher–Scholar. Domains: AI Governance • Cybersecurity • Post-Quantum Cryptography • Digital Statecraft. ORCID: https://orcid.org/0009-0007-5615-3558. Release Date: December 2025. Edition: Final Ultimate Complete Edition (v1.0++). License: Creative Commons Attribution–NonCommercial–NoDerivatives 4.0 International (CC BY-NC-ND 4.0). © 2025 Dr. B. Mazumdar. All rights reserved.</description>
      <pubDate>Tue, 16 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7115697043</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>crisis_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>A Unified Framework for Soft Robotics (2025–2040): Materials, Continuum Modeling, AI-Driven Control, Validation, and Governance</title>
      <link>https://doi.org/10.5281/zenodo.17038549</link>
<description>Ultimate Edition (Clean Canonical Release). Title: A Unified Framework for Soft Robotics (2025–2040): Modeling, AI-Driven Control, Fabrication, Validation, and Governance. Author: Dr. B. Mazumdar, Independent Researcher–Scholar. ORCID: 0009-0007-5615-3558. DOI: 10.5281/zenodo.17038549

This work presents A Unified Framework for Soft Robotics (2025–2040) as a canonical, lifecycle-oriented reference architecture integrating materials, continuum modeling, AI-driven control, fabrication, validation, and governance into a single coherent system. Soft robotic systems operate through intrinsic compliance, continuous deformation, and distributed actuation, enabling safe and adaptive interaction in medical, industrial, environmental, and human-centered domains. Despite rapid advances, the field remains structurally fragmented across disciplines, limiting reproducibility, scalability, regulatory trust, and long-term deployment. This framework addresses that fragmentation by treating soft robotics as a socio-technical system, in which physical intelligence, computational intelligence, validation rigor, and governance constraints are mutually interdependent design elements rather than isolated concerns.

Key contributions include:
- A unified architectural model linking materials, continuum mechanics, uncertainty-aware modeling, hybrid physics–AI control, and digital twins
- Explicit integration of uncertainty quantification, reliability analysis, fatigue modeling, and lifetime assessment as first-class design variables
- Structured fabrication and scalability analysis connecting laboratory prototypes to industrial deployment through cost–yield metrics and technology readiness levels
- A reproducibility-centric validation pipeline grounded in statistical rigor, open-science principles, and auditability
- Embedded ethics, governance, environmental accountability, and regulatory alignment treated as intrinsic system constraints rather than post hoc considerations

Artificial intelligence is positioned as an enabling but bounded component, formalized through hybrid control architectures emphasizing stability, interpretability, human oversight, and safety. The framework does not propose a specific device, algorithm, or material system; instead, it establishes a reference structure for designing, evaluating, and governing soft robotic systems across their full lifecycle. The scope of the framework is explicitly bounded. It does not claim universal optimality, automatic regulatory approval, or unrestricted autonomy. Validity is conditional on stated assumptions regarding continuum behavior, bounded uncertainty, and governed deployment contexts. This Ultimate Edition is released as a stable canonical reference (Version 1.0), intended to support researchers, engineers, policymakers, regulators, standards bodies, and auditors. Future extensions may expand annexures and case-specific instantiations while preserving the core architectural assumptions. This Zenodo record constitutes the authoritative archival release of the framework.

Keywords: Soft Robotics; Continuum Mechanics; Compliant Robotic Systems; AI-Driven Control; Hybrid Physics–AI Systems; Digital Twins; Uncertainty Quantification; Reliability Engineering; Lifecycle Design; Experimental Validation; Reproducibility; Fabrication and Scalability; Governance and Regulation; Ethics-by-Design; Sustainability; Human-Centered Robotics</description>
      <pubDate>Sun, 21 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7116699892</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>other</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Dynamic Global Alignment Model (DGAM–V2): A Causal, Multi-Agent Decision-Intelligence Architecture for Sovereign Strategic Statecraft in a Multipolar World Final Ultimate Edition (2025)</title>
      <link>https://doi.org/10.5281/zenodo.18068873</link>
<description>Canonical Zenodo record description, Final Ultimate Edition (2025). Dynamic Global Alignment Model (DGAM–V2): A Causal, Multi-Agent Decision-Intelligence Architecture for Sovereign Strategic Statecraft in a Multipolar World.

This Zenodo record archives the authoritative Final Ultimate Edition (2025) of the Dynamic Global Alignment Model (DGAM–V2) as a canonical, cryptographically verifiable scholarly artifact. The work is published under a persistent DOI and is intended for long-term archival, citation, institutional reference, and rigorous peer review. DGAM–V2 is released as a single unified research object, composed of two tightly coupled volumes that are cryptographically bound to ensure immutability, auditability, and citation safety.

Repository Contents. Root-1: Main Research Volume (file: DGAM-V2_Final-Ultimate-Edition_Mazumdar-2025.pdf). The Main Research Volume presents the core conceptual, mathematical, and strategic architecture of DGAM–V2, including:
- Formal model structure and theoretical foundations
- Causal, risk-aware decision-intelligence design
- Strategic reasoning under multipolar geopolitical uncertainty
- Explicit governance constraints, ethical boundaries, and sovereignty safeguards
- Structured cross-references to all formal proofs, algorithms, and validation material contained in the Annex Compendium

This volume is written for academic researchers, doctoral scholars, senior policy analysts, think-tank directors, ministers, and journal reviewers.

Root-2: Annex Compendium, Technical &amp; Audit Authority (file: DGAM-V2_Annex-Compendium_Final-Ultimate-Edition_Mazumdar_2025.pdf). The Annex Compendium serves as the definitive technical, mathematical, and governance reference, containing:
- Complete mathematical proofs and formal derivations
- Full Python reference implementations
- Reproducibility protocols and documented boundary conditions
- Audit-grade governance logic, security isolation, and deployment constraints
- Validation experiments, stress testing, and explicit limitation analysis

This volume is intended for reviewers, auditors, engineers, security analysts, and advanced researchers.

Framework Overview. DGAM–V2 is a research-grade decision-intelligence and governance architecture for analyzing strategic state alignment and policy choice under geopolitical uncertainty in a multipolar world. The framework integrates:
- Multi-Agent Reinforcement Learning (MARL) for strategic multi-actor interaction
- Risk-sensitive optimization, including GeoVaR and CVaR
- Structural Causal Models (SCM) with intervention and counterfactual reasoning
- Tail-risk and systemic-shock analysis under non-stationary and regime-switching dynamics
- Governance-by-design, including human-in-the-loop control, auditability, security isolation, and version integrity

DGAM–V2 reframes alignment not as a static bloc-membership or correlational forecasting problem, but as a dynamic, causal, and risk-bounded decision process.

Cryptographic Integrity &amp; Immutability. The two volumes are cryptographically bound using a Merkle construction to ensure immutability, citation safety, and long-term archival integrity. Merkle Root (SHA-256): 9b6764b538bb8872a6cc18debc3ab92e5b96fde0cee75e0f8426ef455b489fcd. This Merkle root irreversibly binds the Main Research Volume and the Annex Compendium into one immutable scholarly object; any modification invalidates the root.

Scope, Boundaries &amp; Ethical Posture. DGAM–V2 is not a predictive oracle, an autonomous decision-making system, or an operational command, escalation, or targeting tool. It is designed strictly as a decision-support and analytical architecture, preserving sovereign human authority, legal accountability, and democratic and institutional oversight. All claims are explicitly bounded, auditable, and reproducible within documented assumptions, data limits, and governance constraints.

Intended Use: doctoral and post-doctoral research; flagship think-tank and strategic-studies programs; government and national-security decision support (non-operational); AI governance, auditability, and sovereign deployment research; and long-term archival reference and peer review.

Author: Dr. B. Mazumdar, Independent Researcher–Scholar. AI Governance • Cybersecurity • Post-Quantum Cryptography • Digital Statecraft. ORCID: https://orcid.org/0009-0007-5615-3558</description>
      <pubDate>Mon, 29 Dec 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7117535551</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>policy_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>HYPER-OPSEC Sovereign Cortex (HOSC) A Constitution-First, Legally Enforced Sovereign Reference Architecture for Autonomic Defence of Classified Sovereign Operational Infrastructures</title>
      <link>https://doi.org/10.5281/zenodo.18118313</link>
      <description>HYPER-OPSEC Sovereign Cortex (HOSC)
Sovereign Reference Architecture (SRA)
A Constitution-First, Legally Enforced Autonomic Defence Architecture for Classified Sovereign Operational Infrastructures
Authoritative Canonical Release — Final Ultimate Edition (Version 2, Clean Release)
DOI: 10.5281/zenodo.18168959. ORCID: https://orcid.org/0009-0007-5615-3558
Author: Dr. B. Mazumdar, Independent Researcher–Scholar. AI Governance • Cybersecurity • Post-Quantum Cryptography (PQC) • Digital Statecraft. Founder — FAIR+D Canon, India (2025)

Abstract
The HYPER-OPSEC Sovereign Cortex (HOSC) is a Sovereign Reference Architecture (SRA) designed to address systemic governance, accountability, and legitimacy failures in contemporary sovereign cyber and autonomic defence systems. HOSC establishes a constitution-first, legally enforced, defence-only autonomic architecture in which sovereign power is structurally constrained before it is exercised. Unlike conventional cybersecurity frameworks that prioritise technical resilience alone, HOSC integrates constitutional law, human-rights invariants, judicial oversight, and non-derogable ethical constraints directly into the operational logic of sovereign defence infrastructures. This Final Ultimate Edition (Version 2, Clean Release) formalises HOSC as a non-operational, non-weaponizable, classification-neutral reference architecture, suitable for adoption, adaptation, or evaluation by sovereign states, constitutional courts, defence establishments, standards authorities, and international governance bodies.
Core Contributions
HOSC introduces several canonical innovations, including: a Non-Derogable Red-Line Engine (NDRE) that encodes constitutional and legal prohibitions as enforceable policy-as-code constraints; a Memory-Immune Operational Ledger (MIOL) enabling post-quantum-resilient provenance, accountability, and judicial auditability; Human-in-Command and Human Liability Anchors, ensuring that legal responsibility for sovereign action remains strictly human and institutionally non-delegable; a formal boundary on automation and verification, explicitly rejecting the substitution of legal judgment with mathematical or algorithmic correctness; a One-Page Misinterpretation Shield that immunises the architecture against authoritarian misuse, emergency normalisation, mass surveillance, and executive absolutism; and a Failure of Adoption and Partial Implementation Declaration, preventing selective or instrumental appropriation of the architecture. Together, these elements establish HOSC not merely as a technical model, but as a constitutionally defensible governance instrument for autonomic defence systems.

Scope and Intended Use
HOSC is: defence-only and explicitly non-offensive; non-operational, providing architectural principles rather than deployable systems; non-weaponizable by design, structurally preventing autonomous or offensive use; and classification-neutral, enabling controlled circulation without embedded classified content. The architecture is intended for: sovereign governments and defence institutions; constitutional courts and judicial review bodies; cybersecurity and critical-infrastructure regulators; standards organisations (ISO/IEC, NIST, IEC); and researchers and policy-makers in AI governance, cybersecurity, and digital statecraft.
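The policy-as-code idea behind the NDRE can be illustrated with a minimal sketch. All names here (`Action`, `RED_LINES`, `authorize`) are hypothetical illustrations, not the paper's reference implementation: non-derogable prohibitions become predicates evaluated before any action executes, and a violation is a hard denial rather than an overridable warning.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    kind: str                          # e.g. "defensive_isolation"
    target: str
    surveils_population: bool = False  # mass-surveillance flag

# Hypothetical red-line predicates: each returns True when the
# action would cross a non-derogable prohibition.
RED_LINES = {
    "no_offensive_use": lambda a: a.kind.startswith("offensive"),
    "no_mass_surveillance": lambda a: a.surveils_population,
}

def check_red_lines(action: Action) -> list:
    """Names of every red line the proposed action would cross."""
    return [name for name, violated in RED_LINES.items() if violated(action)]

def authorize(action: Action) -> bool:
    # An action proceeds only if no red line fires; there is no
    # override path, mirroring the "non-derogable" property.
    return not check_red_lines(action)
```

The design point is that the constraint set is data, not scattered conditionals: it can be versioned, audited, and reviewed independently of the system that consumes it.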
Standards and Global Alignment
HOSC is designed as a sovereign cyber-security super-layer, aligning with—but not replacing—existing global frameworks, including: NIST CSF and the NIST SP 800 series; ISO/IEC 27001, ISO/IEC 27002, ISO 22301, and IEC 62443; MITRE ATT&amp;CK threat modelling; and emerging AI governance and post-quantum cryptographic standards. It is fully compatible with international human-rights law, constitutional doctrine, and principles of state responsibility.

Canonical Status
This release constitutes the Final Ultimate Edition (Version 2, Clean Release) of the HYPER-OPSEC Sovereign Cortex. The structure, terminology, and governance logic defined herein are canonical and binding. No additions, deletions, or reordering are valid without an explicit version revision. This work forms part of the FAIR+D Canon and represents the authoritative lineage of the HOSC framework.

Keywords
Sovereign Cybersecurity; Constitution-First Architecture; Autonomic Defence; AI Governance; Human-in-Command; Judicial Oversight; Post-Quantum Cryptography; Policy-as-Code; Digital Statecraft; Misuse-Resilient Systems</description>
      <pubDate>Tue, 07 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7118001790</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>crisis_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>C³–IKS Framework (Version 2): Civilizational Cognition and Contemporary Integration System – A Constitutionally Compliant, Secular, and Administratively Executable Model for Integrating Indic Civilizational Cognition into Contemporary Education and Governance</title>
      <link>https://doi.org/10.5281/zenodo.18163576</link>
      <description>C³–IKS Framework (Version 2)
Civilizational Cognition and Contemporary Integration System
The C³–IKS Framework (Civilizational Cognition and Contemporary Integration System), Version 2 – Final Complete Ultimate Edition, is a rigorously articulated, constitutionally compliant, secular, and administratively executable framework for integrating Indic civilizational cognition into contemporary education, governance, and public-policy ecosystems. This clean pre-print consolidates conceptual clarity, methodological rigor, legal and judicial safety, administrative realism, and international comparability into a single policy-grade and publication-ready reference document, suitable for academic citation, policy adoption, and pilot deployment. The framework systematically reconceptualizes Indic civilizational knowledge not as theology, ritual, metaphysics, cultural heritage, or identity-based instruction, but as civilizational cognition: a pre-theological, pre-political cognitive operating system composed of reusable ethical heuristics, principles of epistemic integrity, institutional reasoning tools, and long-term sustainability intelligence. All concepts are treated strictly as analytical and functional categories. Belief systems, worship, ritual practice, metaphysical claims, doctrinal assertions, and cultural symbolism are explicitly excluded, ensuring full compatibility with secular pedagogy, constitutional jurisprudence, and internationally accepted academic norms.
Version 2 introduces a four-pillar execution architecture that enables lawful and operational integration: Civilizational Cognition (CC) – pre-institutional ethical and epistemic intelligence abstracted as analytical heuristics (e.g., systems order, epistemic integrity, role-based ethical responsibility, resource circulation, and long-term resilience); Institutional Translation (IT) – systematic conversion of cognition into constitutionally legible, secular, and jurisprudence-safe institutional forms; Contemporary Application (CA) – deployment across modern domains including education, climate policy, public administration, and AI governance; and Governance &amp; Execution (GE) – authority mapping, constitutional and judicial safeguards, pedagogical protocols, pilot economics, risk-mitigation mechanisms, assessment alignment, monitoring indicators, and auditability.

Methodologically, the framework is derived through: (a) abstraction of classical Indic sources exclusively for conceptual and cognitive patterns; (b) comparative analysis with Confucian ethics, Greek philosophy, Roman civic theory, and Enlightenment rationality to establish analytical equivalence; (c) a constitutional and jurisprudential scan to ensure legal compatibility; and (d) policy-design logic to guarantee administrative executability and scalability. A dedicated positioning analysis distinguishes C³–IKS from value-education, heritage-centric, or cultural-revival initiatives, establishing it instead as a policy-grade translation architecture for civilizational cognition. The framework is explicitly aligned with the Constitution of India (including Articles 21 and 51A(h)), Supreme Court jurisprudence on secular, value-based education (notably Aruna Roy v. Union of India, 2002), the National Education Policy (NEP) 2020, and UNESCO principles on education, sustainability, and global citizenship.
A worked AI governance case study demonstrates how civilizational cognition can inform contemporary policy design using entirely secular regulatory language consistent with global norms of algorithmic accountability, impact assessment, and precautionary governance. Designed as a policy-adoptable, academically rigorous, and pilot-deployable reference framework, this Final Complete Ultimate Edition is fully compatible with secular democracies worldwide. No cultural assimilation, belief adoption, or identity enforcement is required. Assessment is competency-based, evidence-driven, and non-ideological. This edition establishes a world-class benchmark for the systematic translation of civilizational cognition into contemporary institutional reasoning, positioning India as a contemporary knowledge-producing civilization and offering a globally legible, constitutionally safe model for education reform, ethics curricula, sustainability policy, and emerging-technology governance.</description>
      <pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7118431524</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>governance_reform</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>HYPER-OPSEC Sovereign Cortex (HOSC) A Constitution-First, Legally Enforced Sovereign Reference Architecture for Autonomic Defence of Classified Sovereign Operational Infrastructures</title>
      <link>https://doi.org/10.5281/zenodo.18168959</link>
      <description>HYPER-OPSEC Sovereign Cortex (HOSC)
Sovereign Reference Architecture (SRA)
A Constitution-First, Legally Enforced Autonomic Defence Architecture for Classified Sovereign Operational Infrastructures
Authoritative Canonical Release — Final Ultimate Edition (Version 2, Clean Release)
DOI: 10.5281/zenodo.18168959. ORCID: https://orcid.org/0009-0007-5615-3558
Author: Dr. B. Mazumdar, Independent Researcher–Scholar. AI Governance • Cybersecurity • Post-Quantum Cryptography (PQC) • Digital Statecraft. Founder — FAIR+D Canon, India (2025)

Abstract
The HYPER-OPSEC Sovereign Cortex (HOSC) is a Sovereign Reference Architecture (SRA) designed to address systemic governance, accountability, and legitimacy failures in contemporary sovereign cyber and autonomic defence systems. HOSC establishes a constitution-first, legally enforced, defence-only autonomic architecture in which sovereign power is structurally constrained before it is exercised. Unlike conventional cybersecurity frameworks that prioritise technical resilience alone, HOSC integrates constitutional law, human-rights invariants, judicial oversight, and non-derogable ethical constraints directly into the operational logic of sovereign defence infrastructures. This Final Ultimate Edition (Version 2, Clean Release) formalises HOSC as a non-operational, non-weaponizable, classification-neutral reference architecture, suitable for adoption, adaptation, or evaluation by sovereign states, constitutional courts, defence establishments, standards authorities, and international governance bodies.
Core Contributions
HOSC introduces several canonical innovations, including: a Non-Derogable Red-Line Engine (NDRE) that encodes constitutional and legal prohibitions as enforceable policy-as-code constraints; a Memory-Immune Operational Ledger (MIOL) enabling post-quantum-resilient provenance, accountability, and judicial auditability; Human-in-Command and Human Liability Anchors, ensuring that legal responsibility for sovereign action remains strictly human and institutionally non-delegable; a formal boundary on automation and verification, explicitly rejecting the substitution of legal judgment with mathematical or algorithmic correctness; a One-Page Misinterpretation Shield that immunises the architecture against authoritarian misuse, emergency normalisation, mass surveillance, and executive absolutism; and a Failure of Adoption and Partial Implementation Declaration, preventing selective or instrumental appropriation of the architecture. Together, these elements establish HOSC not merely as a technical model, but as a constitutionally defensible governance instrument for autonomic defence systems.

Scope and Intended Use
HOSC is: defence-only and explicitly non-offensive; non-operational, providing architectural principles rather than deployable systems; non-weaponizable by design, structurally preventing autonomous or offensive use; and classification-neutral, enabling controlled circulation without embedded classified content. The architecture is intended for: sovereign governments and defence institutions; constitutional courts and judicial review bodies; cybersecurity and critical-infrastructure regulators; standards organisations (ISO/IEC, NIST, IEC); and researchers and policy-makers in AI governance, cybersecurity, and digital statecraft.
Standards and Global Alignment
HOSC is designed as a sovereign cyber-security super-layer, aligning with—but not replacing—existing global frameworks, including: NIST CSF and the NIST SP 800 series; ISO/IEC 27001, ISO/IEC 27002, ISO 22301, and IEC 62443; MITRE ATT&amp;CK threat modelling; and emerging AI governance and post-quantum cryptographic standards. It is fully compatible with international human-rights law, constitutional doctrine, and principles of state responsibility.

Canonical Status
This release constitutes the Final Ultimate Edition (Version 2, Clean Release) of the HYPER-OPSEC Sovereign Cortex. The structure, terminology, and governance logic defined herein are canonical and binding. No additions, deletions, or reordering are valid without an explicit version revision. This work forms part of the FAIR+D Canon and represents the authoritative lineage of the HOSC framework.

Keywords
Sovereign Cybersecurity; Constitution-First Architecture; Autonomic Defence; AI Governance; Human-in-Command; Judicial Oversight; Post-Quantum Cryptography; Policy-as-Code; Digital Statecraft; Misuse-Resilient Systems</description>
      <pubDate>Tue, 07 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7118518044</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>crisis_governance</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>C³–IKS Framework (Version 2): Civilizational Cognition and Contemporary Integration System – A Constitutionally Compliant, Secular, and Administratively Executable Model for Integrating Indic Civilizational Cognition into Contemporary Education and Governance</title>
      <link>https://doi.org/10.5281/zenodo.18155069</link>
      <description>C³–IKS Framework (Version 2)
Civilizational Cognition and Contemporary Integration System
The C³–IKS Framework (Civilizational Cognition and Contemporary Integration System), Version 2 – Final Complete Ultimate Edition, is a rigorously articulated, constitutionally compliant, secular, and administratively executable framework for integrating Indic civilizational cognition into contemporary education, governance, and public-policy ecosystems. This clean pre-print consolidates conceptual clarity, methodological rigor, legal and judicial safety, administrative realism, and international comparability into a single policy-grade and publication-ready reference document, suitable for academic citation, policy adoption, and pilot deployment. The framework systematically reconceptualizes Indic civilizational knowledge not as theology, ritual, metaphysics, cultural heritage, or identity-based instruction, but as civilizational cognition: a pre-theological, pre-political cognitive operating system composed of reusable ethical heuristics, principles of epistemic integrity, institutional reasoning tools, and long-term sustainability intelligence. All concepts are treated strictly as analytical and functional categories. Belief systems, worship, ritual practice, metaphysical claims, doctrinal assertions, and cultural symbolism are explicitly excluded, ensuring full compatibility with secular pedagogy, constitutional jurisprudence, and internationally accepted academic norms.
Version 2 introduces a four-pillar execution architecture that enables lawful and operational integration: Civilizational Cognition (CC) – pre-institutional ethical and epistemic intelligence abstracted as analytical heuristics (e.g., systems order, epistemic integrity, role-based ethical responsibility, resource circulation, and long-term resilience); Institutional Translation (IT) – systematic conversion of cognition into constitutionally legible, secular, and jurisprudence-safe institutional forms; Contemporary Application (CA) – deployment across modern domains including education, climate policy, public administration, and AI governance; and Governance &amp; Execution (GE) – authority mapping, constitutional and judicial safeguards, pedagogical protocols, pilot economics, risk-mitigation mechanisms, assessment alignment, monitoring indicators, and auditability.

Methodologically, the framework is derived through: (a) abstraction of classical Indic sources exclusively for conceptual and cognitive patterns; (b) comparative analysis with Confucian ethics, Greek philosophy, Roman civic theory, and Enlightenment rationality to establish analytical equivalence; (c) a constitutional and jurisprudential scan to ensure legal compatibility; and (d) policy-design logic to guarantee administrative executability and scalability. A dedicated positioning analysis distinguishes C³–IKS from value-education, heritage-centric, or cultural-revival initiatives, establishing it instead as a policy-grade translation architecture for civilizational cognition. The framework is explicitly aligned with the Constitution of India (including Articles 21 and 51A(h)), Supreme Court jurisprudence on secular, value-based education (notably Aruna Roy v. Union of India, 2002), the National Education Policy (NEP) 2020, and UNESCO principles on education, sustainability, and global citizenship.
A worked AI governance case study demonstrates how civilizational cognition can inform contemporary policy design using entirely secular regulatory language consistent with global norms of algorithmic accountability, impact assessment, and precautionary governance. Designed as a policy-adoptable, academically rigorous, and pilot-deployable reference framework, this Final Complete Ultimate Edition is fully compatible with secular democracies worldwide. No cultural assimilation, belief adoption, or identity enforcement is required. Assessment is competency-based, evidence-driven, and non-ideological. This edition establishes a world-class benchmark for the systematic translation of civilizational cognition into contemporary institutional reasoning, positioning India as a contemporary knowledge-producing civilization and offering a globally legible, constitutionally safe model for education reform, ethics curricula, sustainability policy, and emerging-technology governance.</description>
      <pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7118717399</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>governance_reform</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>C³–IKS Framework: Civilizational Cognition and Contemporary Integration System A Constitutionally Compliant, Secular, and Administratively Implementable Framework for Integrating Indic Civilizational Cognition into Contemporary Education</title>
      <link>https://doi.org/10.5281/zenodo.18155070</link>
      <description>C³–IKS Framework: Civilizational Cognition and Contemporary Integration System
A Constitutionally Compliant, Secular, and Administratively Implementable Policy–Academic Framework
Final • Complete • Ultimate Edition
DOI: 10.5281/zenodo.18155070
Author: Dr. B. Mazumdar
ORCID: https://orcid.org/0009-0007-5615-3558
Affiliation: Independent Interdisciplinary Researcher–Scholar (AI Governance, Cybersecurity, Post-Quantum Cryptography, Digital Statecraft)
Founder: FAIR+D Canon (2025)
Language: English
Document Type: Final Policy–Academic Framework

Description
The C³–IKS Framework (Civilizational Cognition and Contemporary Integration System) is a rigorously developed, constitutionally compliant, secular, and administratively implementable policy–academic framework for integrating Indic civilizational knowledge into contemporary education systems. Moving decisively beyond theological, ritualistic, or belief-based interpretations, the framework reconceptualizes civilizational knowledge as civilizational cognition—a structured, pre-institutional cognitive operating system encompassing ethical reasoning, epistemic integrity, ecological intelligence, governance heuristics, and systems thinking. Indic civilizational constructs are treated strictly as analytical and functional models, comparable in intellectual rigor and global legibility to Greek philosophy, Confucian ethics, and Enlightenment rationality. Explicitly aligned with the Constitution of India, established judicial doctrine on secular education, and the National Education Policy (NEP) 2020, the C³–IKS Framework provides a legally safe, pedagogically neutral, and globally credible pathway for restoring indigenous knowledge systems to active and operational relevance in modern education—without violating secularism, academic integrity, or constitutional safeguards.
Core Structural Innovation
The framework is organized around a three-layer canonical architecture ensuring conceptual clarity, legal defensibility, and administrative usability.

Layer I: Civilizational Cognition (CC)
Pre-institutional cognitive principles treated as analytical heuristics, not metaphysical claims:
Ṛta — systems order, balance, and sustainability;
Satya — epistemic integrity and truth alignment;
Dharma — role-based ethical responsibility;
Yajña — resource circulation and reciprocity;
Tapas — long-term discipline and resilience.

Layer II: Institutional Translation (IT)
Constitutionally legible institutional forms:
Sabha–Samiti → participatory governance mechanisms;
Guru–Śiṣya → mentorship-based education systems;
Varṇa (functional) → skill-based functional differentiation;
Āśrama → life-cycle planning and civic roles.

Layer III: Contemporary Application (CA)
Operational deployment across modern domains:
Climate policy → circular economy and sustainability;
AI ethics → algorithmic balance, restraint, and accountability;
Public administration → role-based accountability;
Education → critical inquiry and systems thinking.

Constitutional, Judicial, and Policy Compatibility
The framework is explicitly aligned with: Article 21 (holistic human development); Article 51A(h) (scientific temper and inquiry); judicial precedents permitting secular, value-based education; and NEP 2020 priorities including IKS, multidisciplinarity, and competency-based learning. No component requires belief, worship, ritual practice, or doctrinal adherence. All constructs are presented exclusively as analytical, civic, and educational models.
Pedagogical and Administrative Implementability
Designed for direct administrative usability, the framework enables: transversal curricular embedding across History, Civics, Ethics, Environment, and Education; supplementary cognitive readers; teacher orientation and capacity-building modules; case-based, competency-focused assessment units; and pilot-ready adoption at district, SCERT, and board levels. This ensures scalability, institutional continuity, and minimal ideological resistance.

International Relevance
The C³–IKS Framework positions India as a knowledge-producing civilization, not merely a cultural subject. Its analytical structure and secular framing render it globally legible and suitable for international academic citation, comparative civilizational studies, and policy discourse.

Declaration of Secular and Academic Intent
This work is presented solely as an original academic–policy contribution. All interpretations are non-theological, non-sectarian, and non-doctrinal, intended exclusively for educational, civic, and institutional development. The framework fully complies with constitutional secularism, academic integrity, and judicial standards.

Keywords
Indian Knowledge Systems (IKS); Civilizational Cognition; Education Reform; NEP 2020; Secular Pedagogy; Systems Ethics; Policy Framework; Governance and Education</description>
      <pubDate>Sun, 05 Jan 2025 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7119052048</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>governance_reform</category>
      <category>methodology_theory</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Hyperautomation Architectures for Financial Workflow Transformation: Integrating Generative Artificial Intelligence, Process Mining, and Socio-Technical Systems Theory</title>
      <link>https://doi.org/10.5281/zenodo.18297475</link>
      <description>The accelerating convergence of hyperautomation, generative artificial intelligence, and process mining is reshaping contemporary financial workflows, redefining how organizations conceptualize efficiency, control, and strategic intelligence in digitally mediated environments. Financial operations, traditionally characterized by high volumes of rule-based transactions, regulatory intensity, and legacy system dependence, now represent a critical frontier for advanced automation paradigms that extend beyond conventional robotic process automation toward self-learning, adaptive, and context-aware systems (Panetta, 2021). This research develops an extensive theoretical and analytical examination of hyperautomation frameworks in financial workflows, grounded explicitly in the generative artificial intelligence and process mining framework articulated by Krishnan and Bhat (2025), while situating their contribution within broader debates spanning Industry 4.0, digital twins, neural analytics, and socio-technical transformation. The study adopts a qualitative, theory-building research design that synthesizes multidisciplinary literature across information systems, artificial intelligence, operations management, and organizational theory, enabling an interpretive analysis of how hyperautomation reconfigures financial process intelligence, governance mechanisms, and human-machine collaboration (Haleem et al., 2021). Rather than offering empirical measurement or computational modeling, the article emphasizes deep conceptual elaboration, tracing the historical evolution of automation from deterministic systems toward generative, probabilistic architectures capable of autonomous decision support (Park, 2018). 
The abstracted findings reveal that hyperautomation in financial workflows operates not merely as a technological enhancement but as an institutional re-alignment mechanism that alters accountability structures, knowledge flows, and strategic foresight capabilities (Krishnan &amp; Bhat, 2025). Results from the interpretive analysis indicate that the integration of generative AI with process mining enables continuous process discovery, anomaly interpretation, and scenario simulation, thereby expanding financial organizations’ capacity for anticipatory governance and adaptive compliance (Jacoby &amp; Usländer, 2020). However, the discussion also highlights persistent challenges, including algorithmic opacity, cognitive displacement of human expertise, and uneven diffusion across organizational clusters and labor markets (Goher et al., 2021). By critically engaging with these tensions, the article contributes an original, publication-ready synthesis that advances hyperautomation theory in financial contexts and delineates future research trajectories at the intersection of intelligent systems, organizational resilience, and digital ethics.</description>
      <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7124686412</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Hyperautomation Architectures for Financial Workflow Transformation: Integrating Generative Artificial Intelligence, Process Mining, and Socio-Technical Systems Theory</title>
      <link>https://doi.org/10.5281/zenodo.18297474</link>
      <description>The accelerating convergence of hyperautomation, generative artificial intelligence, and process mining is reshaping contemporary financial workflows, redefining how organizations conceptualize efficiency, control, and strategic intelligence in digitally mediated environments. Financial operations, traditionally characterized by high volumes of rule-based transactions, regulatory intensity, and legacy system dependence, now represent a critical frontier for advanced automation paradigms that extend beyond conventional robotic process automation toward self-learning, adaptive, and context-aware systems (Panetta, 2021). This research develops an extensive theoretical and analytical examination of hyperautomation frameworks in financial workflows, grounded explicitly in the generative artificial intelligence and process mining framework articulated by Krishnan and Bhat (2025), while situating their contribution within broader debates spanning Industry 4.0, digital twins, neural analytics, and socio-technical transformation. The study adopts a qualitative, theory-building research design that synthesizes multidisciplinary literature across information systems, artificial intelligence, operations management, and organizational theory, enabling an interpretive analysis of how hyperautomation reconfigures financial process intelligence, governance mechanisms, and human-machine collaboration (Haleem et al., 2021). Rather than offering empirical measurement or computational modeling, the article emphasizes deep conceptual elaboration, tracing the historical evolution of automation from deterministic systems toward generative, probabilistic architectures capable of autonomous decision support (Park, 2018). 
The abstracted findings reveal that hyperautomation in financial workflows operates not merely as a technological enhancement but as an institutional re-alignment mechanism that alters accountability structures, knowledge flows, and strategic foresight capabilities (Krishnan &amp; Bhat, 2025). Results from the interpretive analysis indicate that the integration of generative AI with process mining enables continuous process discovery, anomaly interpretation, and scenario simulation, thereby expanding financial organizations’ capacity for anticipatory governance and adaptive compliance (Jacoby &amp; Usländer, 2020). However, the discussion also highlights persistent challenges, including algorithmic opacity, cognitive displacement of human expertise, and uneven diffusion across organizational clusters and labor markets (Goher et al., 2021). By critically engaging with these tensions, the article contributes an original, publication-ready synthesis that advances hyperautomation theory in financial contexts and delineates future research trajectories at the intersection of intelligent systems, organizational resilience, and digital ethics.</description>
      <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7124709991</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Hyperautomation Architectures for Financial Workflow Transformation: Integrating Generative Artificial Intelligence, Process Mining, and Socio-Technical Systems Theory</title>
      <link>https://doi.org/10.5281/zenodo.18340133</link>
      <description>The accelerating convergence of hyperautomation, generative artificial intelligence, and process mining is reshaping contemporary financial workflows, redefining how organizations conceptualize efficiency, control, and strategic intelligence in digitally mediated environments. Financial operations, traditionally characterized by high volumes of rule-based transactions, regulatory intensity, and legacy system dependence, now represent a critical frontier for advanced automation paradigms that extend beyond conventional robotic process automation toward self-learning, adaptive, and context-aware systems (Panetta, 2021). This research develops an extensive theoretical and analytical examination of hyperautomation frameworks in financial workflows, grounded explicitly in the generative artificial intelligence and process mining framework articulated by Krishnan and Bhat (2025), while situating their contribution within broader debates spanning Industry 4.0, digital twins, neural analytics, and socio-technical transformation. The study adopts a qualitative, theory-building research design that synthesizes multidisciplinary literature across information systems, artificial intelligence, operations management, and organizational theory, enabling an interpretive analysis of how hyperautomation reconfigures financial process intelligence, governance mechanisms, and human-machine collaboration (Haleem et al., 2021). Rather than offering empirical measurement or computational modeling, the article emphasizes deep conceptual elaboration, tracing the historical evolution of automation from deterministic systems toward generative, probabilistic architectures capable of autonomous decision support (Park, 2018). 
The abstracted findings reveal that hyperautomation in financial workflows operates not merely as a technological enhancement but as an institutional re-alignment mechanism that alters accountability structures, knowledge flows, and strategic foresight capabilities (Krishnan &amp; Bhat, 2025). Results from the interpretive analysis indicate that the integration of generative AI with process mining enables continuous process discovery, anomaly interpretation, and scenario simulation, thereby expanding financial organizations’ capacity for anticipatory governance and adaptive compliance (Jacoby &amp; Usländer, 2020). However, the discussion also highlights persistent challenges, including algorithmic opacity, cognitive displacement of human expertise, and uneven diffusion across organizational clusters and labor markets (Goher et al., 2021). By critically engaging with these tensions, the article contributes an original, publication-ready synthesis that advances hyperautomation theory in financial contexts and delineates future research trajectories at the intersection of intelligent systems, organizational resilience, and digital ethics.</description>
      <pubDate>Thu, 22 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7125419723</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Hyperautomation Architectures for Financial Workflow Transformation: Integrating Generative Artificial Intelligence, Process Mining, and Socio-Technical Systems Theory</title>
      <link>https://doi.org/10.5281/zenodo.18340134</link>
      <description>The accelerating convergence of hyperautomation, generative artificial intelligence, and process mining is reshaping contemporary financial workflows, redefining how organizations conceptualize efficiency, control, and strategic intelligence in digitally mediated environments. Financial operations, traditionally characterized by high volumes of rule-based transactions, regulatory intensity, and legacy system dependence, now represent a critical frontier for advanced automation paradigms that extend beyond conventional robotic process automation toward self-learning, adaptive, and context-aware systems (Panetta, 2021). This research develops an extensive theoretical and analytical examination of hyperautomation frameworks in financial workflows, grounded explicitly in the generative artificial intelligence and process mining framework articulated by Krishnan and Bhat (2025), while situating their contribution within broader debates spanning Industry 4.0, digital twins, neural analytics, and socio-technical transformation. The study adopts a qualitative, theory-building research design that synthesizes multidisciplinary literature across information systems, artificial intelligence, operations management, and organizational theory, enabling an interpretive analysis of how hyperautomation reconfigures financial process intelligence, governance mechanisms, and human-machine collaboration (Haleem et al., 2021). Rather than offering empirical measurement or computational modeling, the article emphasizes deep conceptual elaboration, tracing the historical evolution of automation from deterministic systems toward generative, probabilistic architectures capable of autonomous decision support (Park, 2018). 
The abstracted findings reveal that hyperautomation in financial workflows operates not merely as a technological enhancement but as an institutional re-alignment mechanism that alters accountability structures, knowledge flows, and strategic foresight capabilities (Krishnan &amp; Bhat, 2025). Results from the interpretive analysis indicate that the integration of generative AI with process mining enables continuous process discovery, anomaly interpretation, and scenario simulation, thereby expanding financial organizations’ capacity for anticipatory governance and adaptive compliance (Jacoby &amp; Usländer, 2020). However, the discussion also highlights persistent challenges, including algorithmic opacity, cognitive displacement of human expertise, and uneven diffusion across organizational clusters and labor markets (Goher et al., 2021). By critically engaging with these tensions, the article contributes an original, publication-ready synthesis that advances hyperautomation theory in financial contexts and delineates future research trajectories at the intersection of intelligent systems, organizational resilience, and digital ethics.</description>
      <pubDate>Thu, 22 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7125421746</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>digital_transformation</category>
      <category>dataset</category>
    </item>
    <item>
      <title>PROMETHEUS-GAIA: A PUBLIC-SAFE ARCHITECTURAL PATTERN For Hybrid Energy Ecosystems</title>
      <link>https://doi.org/10.5281/zenodo.18357788</link>
      <description>PROMETHEUS-GAIA: A PUBLIC-SAFE ARCHITECTURAL PATTERN For Hybrid Energy Ecosystems (Non-Operational • Non-Instantiable • Conceptual Only)
Date: October 24, 2025
Document Type: Research Report / Architectural Pattern Definition
Distribution: Public-Safe / Ethical Review Only
Classification: CONCEPTUAL PATTERN (No Implementation Details)

1. Executive Summary: The Ontological Shift in Energy Governance
The global transition from fossil-fuel dominance to renewable hybrid ecosystems represents not merely a technological substitution of generation sources but a fundamental ontological shift in how energy systems are conceived, governed, and secured. Traditional electrical grids, developed over the last century, operate on a paradigm of centralized optimization.[1] In this legacy model, a small number of high-inertia sources—coal, nuclear, large hydro—are dispatched by a central authority to meet inelastic demand. The governing logic is one of &quot;command and control,&quot; where safety is often a secondary control loop, a set of physical breakers and relays designed to intervene only when the primary optimization algorithms fail to maintain stability. This report presents PROMETHEUS-GAIA, a radical architectural pattern that inverts this traditional relationship. In the Prometheus-Gaia paradigm, energy systems are governed by Constraint-First Autonomy.[3] The system does not ask the traditional optimization question: &quot;What is the most efficient way to meet the current demand?&quot; Instead, it fundamentally reframes the operational mandate to ask: &quot;Which operational states remain within the non-negotiable envelope of public safety?&quot;.[4] This inversion prioritizes the maintenance of lawful, stable, and survivable states over the maximization of throughput or economic efficiency.
The architecture is hybrid and fractal, acknowledging the necessity of high-energy, centralized cores (the Prometheus Tier) to provide base-load inertia and strategic direction for cities and regions, while coupling this with a highly distributed, resilient edge (the Gaia Tier) capable of hyper-local validation and survival.[5] This hybrid approach resolves the tension between the &quot;Big Grid&quot; necessity for inertia and the &quot;Microgrid&quot; necessity for resilience. Crucially, this report defines a &quot;Public-Safe&quot; system as one that possesses the Right to Stop.[7] Just as a human worker on an oil rig or in a high-voltage substation has the absolute right to halt unsafe work without fear of retribution, the autonomous Gaia nodes within this architecture possess the algorithmic authority to &quot;refuse&quot; commands from the central Prometheus core if those commands violate local safety constraints. This &quot;Physics of Refusal&quot; ensures that no central error, algorithmic hallucination, or malicious optimization can cascade into a catastrophic failure at the edge. To ensure accountability in such a highly autonomous system, the pattern introduces Dual-Proof Accountability.[3] Every action within the grid is bracketed by two immutable proofs: an AION (Logical Proof) generated before the action via rigorous timeline simulation, and a WORM (Physical Proof) recorded after the action via immutable logging. This ensures the system is not only theoretically safe but historically accountable, creating a bridge between the digital intent of the AI and the physical reality of the infrastructure.
Note on Non-Proliferation: This document outlines a pattern, not a blueprint. Specific algorithms, material specifications, fuel cycle details, and tuning parameters are intentionally excluded to prevent the accidental or malicious instantiation of these concepts without adequate ethical review. This follows the &quot;Public-Safe&quot; documentation standard established in the SPLITWING aerial systems framework.[11]

2. The Philosophical Imperative
2.1 The Crisis of Optimization
The prevailing dogma of modern smart grid design is optimization.[4] Algorithms, whether residing in centralized SCADA systems or distributed market agents, are programmed to minimize cost, maximize throughput, or balance load within tight margins. In stable, predictable environments, optimization is a virtue; it squeezes maximum value from limited resources. However, in the volatile, adversarial, and entropy-rich environment of a 21st-century energy grid—beset by climate instability, cyber threats, and fluctuating renewable generation—unconstrained optimization becomes a critical vulnerability. An optimization algorithm is, by definition, a boundary-seeker.[14] To find the &quot;global maximum&quot; of efficiency, it pushes the system state as close as possible to the physical limits of the infrastructure—maximizing line thermal limits, minimizing voltage buffers, and reducing spinning reserve to the bare minimum required by regulation. It seeks the cliff edge because the view—the mathematical efficiency—is best there. When a system operating at this theoretical limit encounters an unmodeled disturbance, it lacks the buffer to recover, leading to the brittle failures seen in recent grid collapses. In a Constraint-First system, as proposed by the Prometheus-Gaia pattern, this optimization logic is rejected as the primary governing principle.[3] The primary goal of the system is not to optimize performance but to satisfy constraints.[15] The safety of the public, the integrity of the infrastructure, and the stability of the frequency are treated as &quot;hard&quot; constraints—inviolable boundaries that cannot be crossed, regardless of the potential economic gain or mission urgency.
2.1.1 The Definition of &quot;Public-Safe&quot;
In the context of this architectural pattern, &quot;Public-Safe&quot; is a rigorous engineering definition, not a marketing term or a vague aspiration. A Public-Safe energy system is defined by three axioms derived from the Collective framework and the Metabolic X3 principles:[3]
- Deterministic Safety: The system must remain within a pre-defined &quot;safe set&quot; of states. If the system approaches the boundary of this set, safety protocols—specifically Control Barrier Functions (CBFs)—must intervene deterministically, overriding any optimization or mission logic.[4] The safety layer is not a monitor; it is a governor.
- The Right to Stop: The system must possess a &quot;safe stop&quot; state that is accessible from any operating condition. In energy systems, &quot;stopping&quot; does not mean turning off the physics (which is impossible given the conservation of energy) but transitioning to a safe, self-sustained isolation mode (e.g., islanding a microgrid or shedding non-essential load).[7]
- Auditable Intent: The system must be able to prove why it took an action (or refused one) using human-readable logic, backed by cryptographic guarantees.[3] A black-box AI that keeps the lights on but cannot explain its decisions is considered unsafe in this framework.
2.2 The Hybrid Necessity: Why We Need Both Titans and Earth
The debate in energy systems architecture often oscillates between two extremes: the &quot;Big Grid&quot; proponents who favor massive centralized generation (nuclear, fusion, large hydro) for its efficiency and inertia, and the &quot;Off-Grid&quot; decentralists who favor purely distributed renewable microgrids for their independence. Prometheus-Gaia argues that this binary is false and that a resilient public-safe system requires the synthesis of both. A purely distributed grid (Gaia only) lacks inertia. Without the heavy rotating mass of centralized generators (or their synthetic equivalents in large-scale storage), the grid becomes brittle, susceptible to frequency collapse from minor transient loads. It lacks the &quot;strategic&quot; energy density required for heavy industry and dense urbanization. Conversely, a purely centralized grid (Prometheus only) lacks resilience. It represents a single point of failure where a disruption at the core cascades outward, leaving the periphery helpless. It is efficient but fragile.[1] The Prometheus-Gaia pattern proposes a hybrid synthesis:
- Prometheus (The Titan): A centralized, high-inertia core responsible for &quot;Mission&quot; energy—powering cities, industry, and regional transport. It provides the frequency anchor and manages the strategic allocation of resources over long time horizons.
- Gaia (The Earth): A distributed, low-inertia periphery responsible for &quot;Survival&quot; energy—powering homes, hospitals, and critical life support. It validates the core&apos;s stability and provides the resilience to survive its failure through islanding and self-sufficiency.

3. The Architectural Core
3.1 Prometheus: The Centralized Core (Tier 1)
The Prometheus layer represents the high-energy, centralized components of the ecosystem. In a realized system, this would encompass utility-scale fusion reactors, large hydroelectric dams, or gigawatt-scale solar farms. The name &quot;Prometheus&quot; is invoked deliberately to represent the &quot;Provider of Fire&quot;—the source of immense, potentially dangerous, but necessary energy that drives civilization.[20]
3.1.1 Role and Responsibility
Prometheus is responsible for the heavy lifting of the energy grid. Its primary roles are Generation and Transmission.
- High Inertia: It provides the frequency reference (50Hz/60Hz) that stabilizes the entire grid. This physical inertia dampens the noise of millions of switching events at the edge.
- Strategic Intent: It operates on long time horizons (hours to days), optimizing for regional demand forecasts, weather patterns, and economic models. It is the planner of the system.[22]
- Authority: In the classical sense, it &quot;commands&quot; the flow of power. However, under this pattern, its commands are advisory to the lower layers. It says, &quot;I am sending 500MW to Sector 7,&quot; not &quot;Sector 7 must accept 500MW.&quot;
3.1.2 The &quot;Mission&quot; Layer
Prometheus operates primarily at the Mission Layer of the governance hierarchy. It is concerned with the goals of the system: keeping the lights on, charging the electric vehicle fleet, powering the factories. It uses optimization algorithms (Linear</description>
      <pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7125597535</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>crisis_governance</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>PROMETHEUS-GAIA: A PUBLIC-SAFE ARCHITECTURAL PATTERN For Hybrid Energy Ecosystems</title>
      <link>https://doi.org/10.5281/zenodo.18357787</link>
      <description>PROMETHEUS-GAIA: A PUBLIC-SAFE ARCHITECTURAL PATTERN For Hybrid Energy Ecosystems (Non-Operational • Non-Instantiable • Conceptual Only)
Date: October 24, 2025
Document Type: Research Report / Architectural Pattern Definition
Distribution: Public-Safe / Ethical Review Only
Classification: CONCEPTUAL PATTERN (No Implementation Details)

1. Executive Summary: The Ontological Shift in Energy Governance
The global transition from fossil-fuel dominance to renewable hybrid ecosystems represents not merely a technological substitution of generation sources but a fundamental ontological shift in how energy systems are conceived, governed, and secured. Traditional electrical grids, developed over the last century, operate on a paradigm of centralized optimization.[1] In this legacy model, a small number of high-inertia sources—coal, nuclear, large hydro—are dispatched by a central authority to meet inelastic demand. The governing logic is one of &quot;command and control,&quot; where safety is often a secondary control loop, a set of physical breakers and relays designed to intervene only when the primary optimization algorithms fail to maintain stability. This report presents PROMETHEUS-GAIA, a radical architectural pattern that inverts this traditional relationship. In the Prometheus-Gaia paradigm, energy systems are governed by Constraint-First Autonomy.[3] The system does not ask the traditional optimization question: &quot;What is the most efficient way to meet the current demand?&quot; Instead, it fundamentally reframes the operational mandate to ask: &quot;Which operational states remain within the non-negotiable envelope of public safety?&quot;.[4] This inversion prioritizes the maintenance of lawful, stable, and survivable states over the maximization of throughput or economic efficiency.
The architecture is hybrid and fractal, acknowledging the necessity of high-energy, centralized cores (the Prometheus Tier) to provide base-load inertia and strategic direction for cities and regions, while coupling this with a highly distributed, resilient edge (the Gaia Tier) capable of hyper-local validation and survival.[5] This hybrid approach resolves the tension between the &quot;Big Grid&quot; necessity for inertia and the &quot;Microgrid&quot; necessity for resilience. Crucially, this report defines a &quot;Public-Safe&quot; system as one that possesses the Right to Stop.[7] Just as a human worker on an oil rig or in a high-voltage substation has the absolute right to halt unsafe work without fear of retribution, the autonomous Gaia nodes within this architecture possess the algorithmic authority to &quot;refuse&quot; commands from the central Prometheus core if those commands violate local safety constraints. This &quot;Physics of Refusal&quot; ensures that no central error, algorithmic hallucination, or malicious optimization can cascade into a catastrophic failure at the edge. To ensure accountability in such a highly autonomous system, the pattern introduces Dual-Proof Accountability.[3] Every action within the grid is bracketed by two immutable proofs: an AION (Logical Proof) generated before the action via rigorous timeline simulation, and a WORM (Physical Proof) recorded after the action via immutable logging. This ensures the system is not only theoretically safe but historically accountable, creating a bridge between the digital intent of the AI and the physical reality of the infrastructure.
Note on Non-Proliferation: This document outlines a pattern, not a blueprint. Specific algorithms, material specifications, fuel cycle details, and tuning parameters are intentionally excluded to prevent the accidental or malicious instantiation of these concepts without adequate ethical review. This follows the &quot;Public-Safe&quot; documentation standard established in the SPLITWING aerial systems framework.[11]

2. The Philosophical Imperative
2.1 The Crisis of Optimization
The prevailing dogma of modern smart grid design is optimization.[4] Algorithms, whether residing in centralized SCADA systems or distributed market agents, are programmed to minimize cost, maximize throughput, or balance load within tight margins. In stable, predictable environments, optimization is a virtue; it squeezes maximum value from limited resources. However, in the volatile, adversarial, and entropy-rich environment of a 21st-century energy grid—beset by climate instability, cyber threats, and fluctuating renewable generation—unconstrained optimization becomes a critical vulnerability. An optimization algorithm is, by definition, a boundary-seeker.[14] To find the &quot;global maximum&quot; of efficiency, it pushes the system state as close as possible to the physical limits of the infrastructure—maximizing line thermal limits, minimizing voltage buffers, and reducing spinning reserve to the bare minimum required by regulation. It seeks the cliff edge because the view—the mathematical efficiency—is best there. When a system operating at this theoretical limit encounters an unmodeled disturbance, it lacks the buffer to recover, leading to the brittle failures seen in recent grid collapses. In a Constraint-First system, as proposed by the Prometheus-Gaia pattern, this optimization logic is rejected as the primary governing principle.[3] The primary goal of the system is not to optimize performance but to satisfy constraints.[15] The safety of the public, the integrity of the infrastructure, and the stability of the frequency are treated as &quot;hard&quot; constraints—inviolable boundaries that cannot be crossed, regardless of the potential economic gain or mission urgency.
2.1.1 The Definition of &quot;Public-Safe&quot;
In the context of this architectural pattern, &quot;Public-Safe&quot; is a rigorous engineering definition, not a marketing term or a vague aspiration. A Public-Safe energy system is defined by three axioms derived from the Collective framework and the Metabolic X3 principles:[3]
- Deterministic Safety: The system must remain within a pre-defined &quot;safe set&quot; of states. If the system approaches the boundary of this set, safety protocols—specifically Control Barrier Functions (CBFs)—must intervene deterministically, overriding any optimization or mission logic.[4] The safety layer is not a monitor; it is a governor.
- The Right to Stop: The system must possess a &quot;safe stop&quot; state that is accessible from any operating condition. In energy systems, &quot;stopping&quot; does not mean turning off the physics (which is impossible given the conservation of energy) but transitioning to a safe, self-sustained isolation mode (e.g., islanding a microgrid or shedding non-essential load).[7]
- Auditable Intent: The system must be able to prove why it took an action (or refused one) using human-readable logic, backed by cryptographic guarantees.[3] A black-box AI that keeps the lights on but cannot explain its decisions is considered unsafe in this framework.
2.2 The Hybrid Necessity: Why We Need Both Titans and Earth
The debate in energy systems architecture often oscillates between two extremes: the &quot;Big Grid&quot; proponents who favor massive centralized generation (nuclear, fusion, large hydro) for its efficiency and inertia, and the &quot;Off-Grid&quot; decentralists who favor purely distributed renewable microgrids for their independence. Prometheus-Gaia argues that this binary is false and that a resilient public-safe system requires the synthesis of both. A purely distributed grid (Gaia only) lacks inertia. Without the heavy rotating mass of centralized generators (or their synthetic equivalents in large-scale storage), the grid becomes brittle, susceptible to frequency collapse from minor transient loads. It lacks the &quot;strategic&quot; energy density required for heavy industry and dense urbanization. Conversely, a purely centralized grid (Prometheus only) lacks resilience. It represents a single point of failure where a disruption at the core cascades outward, leaving the periphery helpless. It is efficient but fragile.[1] The Prometheus-Gaia pattern proposes a hybrid synthesis:
- Prometheus (The Titan): A centralized, high-inertia core responsible for &quot;Mission&quot; energy—powering cities, industry, and regional transport. It provides the frequency anchor and manages the strategic allocation of resources over long time horizons.
- Gaia (The Earth): A distributed, low-inertia periphery responsible for &quot;Survival&quot; energy—powering homes, hospitals, and critical life support. It validates the core&apos;s stability and provides the resilience to survive its failure through islanding and self-sufficiency.

3. The Architectural Core
3.1 Prometheus: The Centralized Core (Tier 1)
The Prometheus layer represents the high-energy, centralized components of the ecosystem. In a realized system, this would encompass utility-scale fusion reactors, large hydroelectric dams, or gigawatt-scale solar farms. The name &quot;Prometheus&quot; is invoked deliberately to represent the &quot;Provider of Fire&quot;—the source of immense, potentially dangerous, but necessary energy that drives civilization.[20]
3.1.1 Role and Responsibility
Prometheus is responsible for the heavy lifting of the energy grid. Its primary roles are Generation and Transmission.
- High Inertia: It provides the frequency reference (50Hz/60Hz) that stabilizes the entire grid. This physical inertia dampens the noise of millions of switching events at the edge.
- Strategic Intent: It operates on long time horizons (hours to days), optimizing for regional demand forecasts, weather patterns, and economic models. It is the planner of the system.[22]
- Authority: In the classical sense, it &quot;commands&quot; the flow of power. However, under this pattern, its commands are advisory to the lower layers. It says, &quot;I am sending 500MW to Sector 7,&quot; not &quot;Sector 7 must accept 500MW.&quot;
3.1.2 The &quot;Mission&quot; Layer
Prometheus operates primarily at the Mission Layer of the governance hierarchy. It is concerned with the goals of the system: keeping the lights on, charging the electric vehicle fleet, powering the factories. It uses optimization algorithms (Linear</description>
      <pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7125607798</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>crisis_governance</category>
      <category>other</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Supplementary materials for: &quot;Trust is the First Algorithm&quot;: A Socio-Technical Framework for Generative AI Integration in a High-Stakes, Resource-Constrained Context</title>
      <link>https://doi.org/10.6084/m9.figshare.31235962.v1</link>
      <description>Research Overview
This repository contains the complete supplementary materials for a groundbreaking qualitative study that develops a novel socio-technical framework for understanding trust in generative artificial intelligence (GenAI) within high-stakes, resource-constrained healthcare environments. Using Palestinian nursing as a critical case study, this research reveals how trust functions not as a passive outcome but as an active, foundational precondition that must be deliberately constructed before AI integration can succeed.

Study Design and Methodology
- Research Design: Qualitative descriptive study employing a critical case study approach
- Sample: 25 registered nurses purposively sampled from diverse Palestinian healthcare settings (public hospitals, private clinics, primary care centers)
- Data Collection: Semi-structured interviews conducted in Arabic (May-June 2024), each lasting 45-75 minutes
- Analysis: Reflexive thematic analysis (Braun &amp; Clarke, 2006, 2019) conducted using NVivo 12
- Ethical Approval: Institutional Review Board of Nablus University (Reference: Nrs. May 2024/7)

Theoretical Contribution
This study makes three significant theoretical contributions:
- The &quot;Algorithm of Trust&quot; Framework: Proposes trust as a sequenced set of socio-technical prerequisites rather than a downstream variable in technology acceptance models
- Contextual Intelligence Concept: Identifies the critical clash between algorithmic systems and local contextual knowledge in resource-constrained settings
- Professional Autonomy Preservation: Documents nurses&apos; insistence on an &quot;override mandate&quot; as essential for maintaining professional jurisdiction and accountability

Key Findings
Analysis revealed four interconnected themes that constitute the &quot;Algorithm of Trust&quot;:
- The Primacy of Explainable Trust: Transparency, local validation, and professional override capability as non-negotiable requirements
- AI as a Double-Edged Sword: A nuanced risk-reward calculus balancing efficiency benefits against accountability paradoxes
- Contextual Intelligence vs. Algorithmic Ignorance: The necessity for AI systems to understand local infrastructure, cultural norms, and resource constraints
- A Mandate for Co-Design: Essential requirements for training, participatory development, and ethical governance

Significance and Implications
This research provides:
- Practical Guidance: Clear implementation pathways for AI integration in challenging environments
- Policy Recommendations: Framework for ethical governance and participatory design in global health technology
- Theoretical Advancement: Extends Science and Technology Studies (STS) and sociology of professions literature to AI contexts
- Global Relevance: Universal principles applicable to any setting characterized by high stakes, professional expertise, and systemic vulnerability

Materials Included
- Interview Guide: Comprehensive 32-question protocol exploring awareness, trust factors, benefits/risks, and contextual considerations
- De-identified Dataset: Complete demographic and thematic coding data for 25 participants
- Data Dictionary: Detailed documentation of variables, coding procedures, and quality assurance measures
- Analysis Scripts: Both R and Python scripts for statistical analysis, visualization, and reproducibility
- Methodological Documentation: Full transparency in analysis procedures and ethical considerations

Methodological Rigor
- Credibility Strategies: Prolonged engagement, peer debriefing, member checking (5 participants), triangulation, reflexivity, audit trail
- Reporting Standards: COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist adherence
- Analytical Transparency: Complete documentation of coding decisions and theme development processes

Contextual Importance
This study is situated within the Palestinian healthcare system—a setting characterized by profound resource constraints, political instability, and fragmented service delivery. This context serves as a &quot;critical case&quot; (Flyvbjerg, 2006) that illuminates fundamental trust prerequisites that might remain obscured in more resourced environments but are essential for equitable global technology integration.

Access and Usage
- Public Materials: Interview guide, data dictionary, analysis scripts, and de-identified dataset
- Restricted Materials: Interview transcripts available upon reasonable request with ethical approvals
- Citation Requirement: Appropriate attribution to original authors required
- License: CC-BY 4.0 for academic and research use

Research Team
- Principal Investigator: Ibrahim Aqtam (Palestinian nurse with clinical experience, providing insider perspective)
- Co-Author: Mustafa Shouli (Qualitative health researcher providing analytical validation)
- Institutional Affiliation: Ibn Sina College for Health Professions, Nablus University for Vocational and Technical Education

Citation
Aqtam, I., &amp; Shouli, M. (2024). &quot;Trust is the First Algorithm&quot;: A Socio-Technical Framework for Generative AI Integration in a High-Stakes, Resource-Constrained Context. [Journal name pending publication]. Supplementary materials available at: [Figshare DOI/LINK]

Keywords for Discovery
Artificial Intelligence Trust, Healthcare Technology Adoption, Resource-Constrained Settings, Palestinian Healthcare, Nursing Professional Autonomy, Qualitative Health Research, Socio-technical Systems, Generative AI Ethics, Implementation Science, Critical Case Study Methodology, Reflexive Thematic Analysis, Participatory Design, Algorithmic Accountability, Cultural Competence in AI, Global Health Technology Equity</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7127118514</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>digital_governance</category>
      <category>trust_legitimacy</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Supplementary materials for: &quot;Trust is the First Algorithm&quot;: A Socio-Technical Framework for Generative AI Integration in a High-Stakes, Resource-Constrained Context</title>
      <link>https://doi.org/10.6084/m9.figshare.31235962</link>
      <description>Research Overview
This repository contains the complete supplementary materials for a groundbreaking qualitative study that develops a novel socio-technical framework for understanding trust in generative artificial intelligence (GenAI) within high-stakes, resource-constrained healthcare environments. Using Palestinian nursing as a critical case study, this research reveals how trust functions not as a passive outcome but as an active, foundational precondition that must be deliberately constructed before AI integration can succeed.

Study Design and Methodology
- Research Design: Qualitative descriptive study employing a critical case study approach
- Sample: 25 registered nurses purposively sampled from diverse Palestinian healthcare settings (public hospitals, private clinics, primary care centers)
- Data Collection: Semi-structured interviews conducted in Arabic (May-June 2024), each lasting 45-75 minutes
- Analysis: Reflexive thematic analysis (Braun &amp; Clarke, 2006, 2019) conducted using NVivo 12
- Ethical Approval: Institutional Review Board of Nablus University (Reference: Nrs. May 2024/7)

Theoretical Contribution
This study makes three significant theoretical contributions:
- The &quot;Algorithm of Trust&quot; Framework: Proposes trust as a sequenced set of socio-technical prerequisites rather than a downstream variable in technology acceptance models
- Contextual Intelligence Concept: Identifies the critical clash between algorithmic systems and local contextual knowledge in resource-constrained settings
- Professional Autonomy Preservation: Documents nurses&apos; insistence on an &quot;override mandate&quot; as essential for maintaining professional jurisdiction and accountability

Key Findings
Analysis revealed four interconnected themes that constitute the &quot;Algorithm of Trust&quot;:
- The Primacy of Explainable Trust: Transparency, local validation, and professional override capability as non-negotiable requirements
- AI as a Double-Edged Sword: A nuanced risk-reward calculus balancing efficiency benefits against accountability paradoxes
- Contextual Intelligence vs. Algorithmic Ignorance: The necessity for AI systems to understand local infrastructure, cultural norms, and resource constraints
- A Mandate for Co-Design: Essential requirements for training, participatory development, and ethical governance

Significance and Implications
This research provides:
- Practical Guidance: Clear implementation pathways for AI integration in challenging environments
- Policy Recommendations: Framework for ethical governance and participatory design in global health technology
- Theoretical Advancement: Extends Science and Technology Studies (STS) and sociology of professions literature to AI contexts
- Global Relevance: Universal principles applicable to any setting characterized by high stakes, professional expertise, and systemic vulnerability

Materials Included
- Interview Guide: Comprehensive 32-question protocol exploring awareness, trust factors, benefits/risks, and contextual considerations
- De-identified Dataset: Complete demographic and thematic coding data for 25 participants
- Data Dictionary: Detailed documentation of variables, coding procedures, and quality assurance measures
- Analysis Scripts: Both R and Python scripts for statistical analysis, visualization, and reproducibility
- Methodological Documentation: Full transparency in analysis procedures and ethical considerations

Methodological Rigor
- Credibility Strategies: Prolonged engagement, peer debriefing, member checking (5 participants), triangulation, reflexivity, audit trail
- Reporting Standards: COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist adherence
- Analytical Transparency: Complete documentation of coding decisions and theme development processes

Contextual Importance
This study is situated within the Palestinian healthcare system, a setting characterized by profound resource constraints, political instability, and fragmented service delivery. This context serves as a &quot;critical case&quot; (Flyvbjerg, 2006) that illuminates fundamental trust prerequisites that might remain obscured in more resourced environments but are essential for equitable global technology integration.

Access and Usage
- Public Materials: Interview guide, data dictionary, analysis scripts, and de-identified dataset
- Restricted Materials: Interview transcripts available upon reasonable request with ethical approvals
- Citation Requirement: Appropriate attribution to original authors required
- License: CC-BY 4.0 for academic and research use

Research Team
- Principal Investigator: Ibrahim Aqtam (Palestinian nurse with clinical experience, providing insider perspective)
- Co-Author: Mustafa Shouli (Qualitative health researcher providing analytical validation)
- Institutional Affiliation: Ibn Sina College for Health Professions, Nablus University for Vocational and Technical Education

Citation
Aqtam, I., &amp; Shouli, M. (2024). &quot;Trust is the First Algorithm&quot;: A Socio-Technical Framework for Generative AI Integration in a High-Stakes, Resource-Constrained Context. [Journal name pending publication]. Supplementary materials available at: [Figshare DOI/LINK]

Keywords for Discovery
Artificial Intelligence Trust, Healthcare Technology Adoption, Resource-Constrained Settings, Palestinian Healthcare, Nursing Professional Autonomy, Qualitative Health Research, Socio-technical Systems, Generative AI Ethics, Implementation Science, Critical Case Study Methodology, Reflexive Thematic Analysis, Participatory Design, Algorithmic Accountability, Cultural Competence in AI, Global Health Technology Equity</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7127135139</guid>
      <source url="https://public-governance.livingmeta.ai">Figshare</source>
      <category>digital_governance</category>
      <category>trust_legitimacy</category>
      <category>dataset</category>
    </item>
    <item>
      <title>From Scientific Symposium to Information Pollution: An Audit of the OSINT Evidence Chain Regarding Hawking&apos;s Visit to the U.S. Virgin Islands and the “Private Island Visit” Narrative (2005–2007)</title>
      <link>https://doi.org/10.7910/dvn/puqnh7</link>
      <description>This dataset provides an auditable evidence chain data product for intelligence and national security text analysis, upgrading traditional “interpretable but non-replicable” text interpretation to an analytical process featuring “one-to-one correspondence between conclusions, evidence, and rules, with reproducibility and accountability.” Through structured extraction and dual-algorithm coding of source materials, the dataset generates tabular outputs containing elements such as claims, evidence, coding outputs, reliability/validity metrics, and gating decisions. Conclusions are constrained by explicit gating thresholds and dispute pool rules to prevent narrative filling or over-inference when evidence is insufficient.

Core features of the dataset include:
- Dual-coding: Generates two parallel coding outputs for the same material to measure analytical stability and assess the impact of analyst/algorithm variance on conclusions.
- Reliability/Validity Gating: Provides verifiable reliability and validity metrics with threshold settings, documenting rework and downgrade rules for non-compliant cases.
- Evidence Tiering &amp; Traceability: Each critical judgment is tied to evidence tiers and source types, enabling traceability from conclusions back to evidence and coding processes.
- Reproducibility &amp; Audit-ready: Data structures and field naming designed for reproducibility and independent auditing, enabling rerunning the analytical loop: “Data → Coding → Scoring → Verification → Sealing Decision.”

Applicable scenarios include: OSINT/intelligence text auditing, policy text evidence alignment, narrative conflict resolution, reproducible analytical conclusions, and computable evidence chain research for governance and compliance.</description>
      <pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7127889761</guid>
      <source url="https://public-governance.livingmeta.ai">Harvard Dataverse</source>
      <category>integrity_ethics</category>
      <category>transparency_openness</category>
      <category>dataset</category>
    </item>
    <item>
      <title>Dataset for Managerial Override of Artificial Intelligence in Operations: An Accountability and Decision Escalation Model</title>
      <link>https://doi.org/10.21227/2srz-vh81</link>
      <description>Artificial intelligence is increasingly embedded in operational planning and execution, yet many organizations fail to scale value because managers frequently override AI recommendations at the point of decision. Prior work on trust in automation shows that reliance depends on vulnerability and uncertainty, not accuracy alone [1]. Behavioral research also indicates that observing algorithmic error can trigger algorithm aversion, even when algorithms outperform humans [2]. At the same time, evidence of algorithm appreciation suggests that managers may prefer algorithmic advice under certain task conditions [3]. This study develops and tests a decision escalation model that explains AI override as a governance and behavioral control outcome shaped by task criticality, outcome uncertainty, and accountability pressure. Building on escalation of commitment [4] and accountability theory [5], we hypothesize that override increases with criticality and uncertainty, and that accountability strengthens these effects even when perceived AI accuracy is high. We propose a two-study design combining operational system logs with a multi-respondent survey and estimate multilevel logistic models with robustness and endogeneity checks. The study contributes to engineering management by reframing AI governance as decision rights and escalation design rather than a technical tool issue.</description>
      <pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7128089574</guid>
      <source url="https://public-governance.livingmeta.ai">IEEE DataPort</source>
      <category>organizational_governance</category>
      <category>organizational_culture</category>
      <category>dataset</category>
    </item>
    <item>
      <title>CARE – A Governance Reference Framework for Explainable, Reviewable and Appealable Decisions</title>
      <link>https://doi.org/10.5281/zenodo.18616824</link>
      <description>CARE (Civic Accountability, Review &amp; Explainability) is an open, governance-first reference framework for AI-assisted and automated decision-making systems that affect people directly. As algorithmic and AI-driven systems increasingly influence public administration, welfare decisions, compliance, and access to rights, CARE addresses a critical structural gap: the lack of operational governance that ensures decisions remain explainable, reviewable, and appealable in practice. CARE is not a product, AI model, or software implementation. It is a technology-agnostic governance architecture that defines the conditions under which decision systems remain legitimate, accountable, and human-centred across sectors and jurisdictions.

Core contributions:
- Operational governance for AI-assisted and automated decision systems
- A unified Explainable → Reviewable → Appealable decision chain
- Explicit design for human vulnerability and low-capacity contexts
- Technology-agnostic and sector-independent applicability
- Public, citable reference architecture enabling reuse, scrutiny, and institutional adoption

CARE is designed to complement existing regulation (e.g. AI governance, administrative law, digital rights frameworks) by translating high-level principles into practical structural requirements before, during, and after decisions are made. The framework is published openly to support transparency, responsible system design, and the protection of individuals subject to automated or AI-assisted decisions.

Author / Originator: Nick Vejle
Status: Public reference framework (open publication)
Intended use: Governance, policy, public sector systems, responsible AI design</description>
      <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://openalex.org/W7128674272</guid>
      <source url="https://public-governance.livingmeta.ai">Zenodo</source>
      <category>digital_governance</category>
      <category>transparency_openness</category>
      <category>dataset</category>
    </item>
  </channel>
</rss>
