When Laws Become Optional: AI, Surveillance Capitalism, and the Erosion of Autonomy

A shift has occurred in how value is created in the modern world. For more than a century, oil fueled industrial civilization, powering everything from factories to wars. In the 21st century, however, a new resource has emerged as even more valuable, and it requires neither drilling rigs nor refineries. It is harvested not from beneath the earth but from the surface of our daily lives.
Every time you search for information, scroll through social media, navigate with GPS, or carry your phone through the world, you generate data. Each click reveals a preference. Each pause indicates interest. Each purchase exposes susceptibility. Individually, these data points seem trivial. Collectively, they form an extraordinarily detailed map of human behavior, desire, and predictability. This is the raw material of what Harvard Business School professor Shoshana Zuboff has termed "surveillance capitalism": an economic system that doesn't merely observe human life but seeks to predict, influence, and ultimately control it.
What makes this system particularly powerful is that most people never gave their consent to participate in it. There was no vote, no public debate, no democratic process. The infrastructure was built around us while we clicked "I agree" on terms of service agreements that no one actually reads. We became unwitting subjects of an enormous behavioral experiment without being informed that we were part of the study.
Artificial intelligence stands at the center of this transformation, functioning as both microscope and sculptor. It examines human behavior with extraordinary precision, then shapes that behavior through carefully designed feedback loops. While private corporations pioneered these techniques to sell products and capture attention, the same tools have become irresistible to governments, especially those that view information control as essential to maintaining power.
The question we must ask is both terrifying and straightforward. What happens when the machinery designed to predict what you'll buy next is repurposed to predict how you'll vote, what you'll believe, and whether you'll resist? And what happens when those wielding this machinery decide that laws designed to restrain their power are mere suggestions, obstacles to be ignored rather than boundaries to be respected?
Traditional capitalism operates on a straightforward principle: create something people want, sell it for more than it costs to make, and profit from the difference. A factory produces cars. A farm grows food. A craftsperson makes furniture. The exchange is visible and comprehensible. You receive a product or service. You pay money. The transaction is complete.
Surveillance capitalism operates on entirely different logic. The companies at the forefront, such as Google, Facebook, and Amazon, appear to offer services for free or products at remarkably low prices. You search without paying. You connect with friends without a subscription fee. The convenience is undeniable, and for many years, it seemed almost miraculous. How could these companies provide so much while charging so little?
The answer, of course, is that we were never the customers. We were the product. More precisely, our behavior was the product. These companies discovered that human experience itself could be converted into data, and that data could be analyzed to predict future behavior with remarkable accuracy. Once you can predict what someone will do, you can sell that prediction to others who want to influence them.
This created what Zuboff describes as a behavioral futures market. Advertisers don't simply want to know who you are; they want to see what you'll do next, what will catch your attention, what will trigger a purchase. Insurance companies want to predict your health risks. Political campaigns want to know which messages will change your vote. Employers wish to anticipate whether you'll be a productive worker or a troublesome one.
The process is elegantly circular. First, surveillance capitalism observes your behavior and builds a model of who you are. Then it tests that model by showing you content, advertisements, or suggestions. Your response to those interventions generates additional data, which further refines the model. Over time, the system doesn't just predict your behavior; it begins to shape it, creating feedback loops that guide you toward more predictable patterns.
You become easier to read and easier to manipulate. Your digital environment transforms into something resembling a maze designed by behavioral psychologists, where every turn has been optimized to produce a specific result.
Artificial intelligence transformed surveillance capitalism from a system of passive observation into something far more powerful: an active engine of behavioral modification. Before machine learning algorithms became sophisticated enough to process massive datasets, companies could track what people did online, but they struggled to make sense of it all. The information was too vast, too noisy, too complex for human analysts to interpret effectively.
Modern AI changed everything. Machine learning systems can identify patterns in billions of data points that would be invisible to any human observer. They detect correlations between seemingly unrelated behaviors. They recognize that people who search for specific terms are more likely to click on particular types of content. They discover that showing someone a specific image at a specific time increases the likelihood of engagement. They learn which emotional triggers are most effective on specific psychological profiles.
These algorithms don't simply analyze what people have done; they build predictive models of what people are likely to do in the future. Given enough data about your past behavior, an AI can forecast with unsettling accuracy what you'll search for tomorrow, what products you'll consider buying, what news stories will catch your attention, and perhaps most disturbingly, what ideas you'll find persuasive.
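To make this concrete, here is a deliberately tiny sketch of what such a predictive model looks like in code. Everything in it, the features, the numbers, the scenario, is invented for illustration; real systems train on billions of events and thousands of signals, but the shape is the same: past behavior goes in, a probability of future behavior comes out.

```python
# A minimal sketch of behavioral prediction on invented toy data.
# Each row describes one hypothetical user: [late-night sessions per week,
# political articles clicked, average scroll depth]. The label records
# whether that user later engaged with a targeted ad.
from sklearn.linear_model import LogisticRegression

past_behavior = [
    [2, 0, 0.3],
    [9, 4, 0.9],
    [1, 1, 0.2],
    [7, 6, 0.8],
    [0, 0, 0.1],
    [8, 3, 0.7],
]
engaged = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(past_behavior, engaged)

# Forecast how likely a new user is to engage, before they have acted.
new_user = [[6, 5, 0.85]]
print(model.predict_proba(new_user)[0][1])  # probability of engagement
```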
But AI's true power lies not in prediction alone but in its ability to create self-fulfilling prophecies. The algorithms don't just forecast your behavior; they actively shape it by controlling what you see and interact with. Your social media feed isn't a neutral window into the world. It's a carefully curated sequence of content designed to maximize your engagement, which in practice means content that triggers strong emotional responses, whether anger, fear, desire, or tribal loyalty.
These systems employ reinforcement learning, the same technique used to train robots and game-playing AIs. Every time you click, scroll, pause, or share, you're providing feedback that helps the algorithm learn how to capture your attention more effectively. The system experiments constantly, testing different approaches, measuring your responses, and refining its strategy. Over thousands of interactions, it becomes remarkably skilled at knowing which buttons to push.
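To see how such a feedback loop can be wired together, consider the toy sketch below: a simple "epsilon-greedy bandit," one of the most basic reinforcement-learning recipes. The content categories and click rates are invented stand-ins for a single user's responses; the point is that the algorithm needs no understanding of you at all, only a stream of reactions to optimize against.

```python
# A toy epsilon-greedy bandit that learns which content keeps a user engaged.
# The "true" click rates stand in for the user's reactions and are hidden
# from the algorithm, which only sees impressions and clicks.
import random

content_types = ["outrage", "cute_animals", "news", "conspiracy"]
true_click_rate = {"outrage": 0.30, "cute_animals": 0.20,
                   "news": 0.10, "conspiracy": 0.25}

shown = {c: 0 for c in content_types}
clicks = {c: 0 for c in content_types}

def choose(epsilon=0.1):
    # Mostly exploit the best-performing option so far, occasionally explore.
    if random.random() < epsilon or all(v == 0 for v in shown.values()):
        return random.choice(content_types)
    return max(content_types, key=lambda c: clicks[c] / max(shown[c], 1))

for _ in range(10_000):                 # thousands of feed impressions
    item = choose()
    shown[item] += 1
    if random.random() < true_click_rate[item]:   # simulated user reaction
        clicks[item] += 1

# The feed converges on whatever maximizes clicks, regardless of its effect
# on the person watching it.
print(max(shown, key=shown.get), {c: shown[c] for c in content_types})
```

Nothing in that loop asks whether the winning content is true, healthy, or good for the person consuming it; engagement is the only score being kept.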
This represents a fundamental shift in the relationship between technology and humanity. We've created machines that study human psychology with inhuman patience and precision, then use that knowledge to guide our behavior in directions that serve institutional interests rather than our own. The digital environment becomes a behavior modification chamber, one that operates continuously and mostly invisibly, nudging us toward choices we might not have made if given neutral information and genuine autonomy.
If corporations discovered they could predict and shape human behavior to sell products more effectively, it was inevitable that governments would recognize the political applications of the same capabilities. The infrastructure of surveillance capitalism, built initially for advertising and consumer manipulation, is ideally suited for political control, electoral manipulation, and social engineering.
Governments have always sought to influence their citizens, of course. Propaganda is ancient. However, what AI-driven surveillance enables is fundamentally different in terms of scale and precision. Traditional propaganda broadcasts the same message to everyone, hoping it resonates with enough people to be effective. Modern algorithmic influence operates at the individual level, delivering personalized messages crafted to exploit each person's specific psychological vulnerabilities.
The Cambridge Analytica scandal offered the world a glimpse of how this works in practice. The company collected data from tens of millions of Facebook users, often without their knowledge or consent, and used it to build detailed psychological profiles. These profiles predicted each person's personality traits, fears, values, and susceptibilities. Armed with this information, political campaigns could craft individualized messages designed not to inform voters but to manipulate them.
Someone identified as prone to anxiety might receive messages emphasizing threats and dangers. Someone driven by moral certainty might see content that portrays issues as simple battles between good and evil. Someone who values personal freedom might encounter appeals warning of government overreach, while someone who prioritizes security might see precisely the opposite framing of the same issue. None of these individuals would see what messages others were receiving. Each would exist in a personalized information bubble, engineered to move them in a desired political direction.
This isn't persuasion in any traditional sense. Classical rhetoric involves making arguments in public, where they can be examined, challenged, and debated. Algorithmic influence operates privately, below the level of conscious awareness, exploiting cognitive biases that most people are unaware of. You can't counter an argument you never knew was being made. You can't resist manipulation when you don't realize you're being manipulated.
In democratic societies, these tools are concerning enough. In authoritarian contexts, they become instruments of comprehensive social control. Governments can extend the same predictive technologies into facial recognition networks that track citizens in public spaces, censorship algorithms that suppress dissent before it spreads, and social credit systems that reward compliance and punish deviation. China's surveillance state demonstrates how completely surveillance capitalism's tools can be repurposed from commercial optimization into political domination.
These systems promise efficiency, public safety, and social harmony. However, they fundamentally alter the relationship between citizens and the state. Instead of governing through laws that apply equally to everyone, the algorithmic state governs through personalized manipulation and prediction. It doesn't need to ban dissent if it can predict who will dissent and intervene to change their mind or isolate their influence before they act.
When artificial intelligence becomes deeply embedded in governance, something more fundamental than policy changes: the very nature of state power transforms. Political theorists and ethicists have begun describing this evolution as the emergence of the behavioral state, a form of government that continuously monitors, analyzes, and shapes the population's thoughts and actions through algorithmic systems.
This isn't the brutal authoritarianism of the 20th century, with its gulags and secret police and crude propaganda. The behavioral state operates more subtly, guiding rather than commanding, nudging rather than forcing. It creates environments where certain choices become natural and others become difficult or invisible. It shapes the information people receive, the options they see, and the paths of least resistance they follow.
Consider how this works in practice. An algorithmic system monitoring social media can identify emerging political movements before they gain momentum. It can predict which individuals are likely to become organizers or influencers within those movements. It can then subtly adjust what content those individuals see, gradually exposing them to information designed to moderate their views, sow internal disagreement, or distract them with other concerns. The movement never experiences obvious repression because it gets channeled into harmless directions before it develops enough coherence to challenge power.
Or consider predictive policing systems that claim to identify likely criminals before crimes occur. These algorithms analyze patterns in existing arrest data, which inevitably reflect the biases and priorities of past policing. They then direct police resources toward communities and individuals flagged as high-risk, which generates more arrests in those areas, thereby confirming the algorithm's predictions. The system creates and reinforces the very patterns it claims merely to observe.
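A small, admittedly simplified simulation makes the circularity visible. In the sketch below, two hypothetical neighborhoods have exactly the same underlying offense rate; the only difference is the biased arrest history the algorithm starts from.

```python
# A toy simulation of the predictive-policing feedback loop. All numbers
# are invented. Both areas have identical underlying offense rates, but
# patrols are allocated in proportion to past recorded arrests.
import random

true_offense_rate = {"A": 0.05, "B": 0.05}   # identical in reality
recorded_arrests = {"A": 30, "B": 10}        # historical bias in the data
total_patrols = 100

for year in range(10):
    total = sum(recorded_arrests.values())
    for area in recorded_arrests:
        patrols = round(total_patrols * recorded_arrests[area] / total)
        # More patrols in an area mean more offenses are observed there,
        # which feeds back into next year's "prediction."
        observed = sum(random.random() < true_offense_rate[area]
                       for _ in range(patrols * 10))
        recorded_arrests[area] += observed

print(recorded_arrests)   # the early disparity reproduces itself, year after year
```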
In the behavioral state, consent becomes an increasingly meaningless concept. How can you consent to manipulation you cannot perceive? How can you object to algorithmic decisions whose logic is proprietary, whose data is hidden, whose operation is too complex for any human to understand fully? You become both subject and product of a vast optimization system, one that treats human autonomy as a bug to be minimized rather than a right to be protected.
The citizen exists within an invisible architecture of behavioral modification, a system optimized not for truth, justice, or human flourishing but for control, stability, and the perpetuation of existing power structures. Your thoughts feel like your own, but the information shaping them has been carefully filtered. Your choices feel free, but the options presented to you have been algorithmically pruned. You navigate what seems like an open landscape but is actually a maze with walls you cannot see.
All of this raises a question that cuts to the heart of democratic governance: What happens when the institutions and individuals wielding these technologies decide they are no longer bound by the legal constraints designed to limit their power?
The rule of law stands as one of civilization's most important innovations. It establishes the principle that power must operate within defined boundaries, that even those who govern must answer to a higher framework of rules, and that violations carry consequences. Laws protecting privacy, requiring warrants for surveillance, guaranteeing due process, and limiting government intrusion exist precisely because history has shown what happens when power operates without restraint.
But laws are only as strong as the willingness to enforce them and the consequences for violating them. When courts are packed with judges who defer to executive authority, when oversight agencies are captured by those they're meant to monitor, when legislative bodies refuse to investigate abuses by their own party, when prosecutors decline to bring charges against the powerful, the legal framework begins to function more as theater than as genuine constraint.
We've seen this pattern before in history. Democratic institutions, designed with careful checks and balances, gradually lose their independence and authority. Violations that once would have triggered an immediate response become normalized as acceptable. Each unaddressed transgression makes the next one easier. The system remains intact on paper but becomes hollow in practice.
In the context of surveillance technology and AI-driven behavioral control, this collapse of legal restraint is particularly dangerous because violations can be invisible. Traditional abuse of power often leaves evidence. Physical surveillance requires agents who may one day talk. Censorship leaves obvious gaps where information should be. But algorithmic manipulation can occur without anyone realizing it happened. How do you enforce laws against violations that leave no visible trace, that target victims who never know they were targeted, that operate through systems too complex for courts to understand?
When a government stops genuinely obeying laws that limit surveillance, require transparency, or protect individual rights, those laws don't immediately disappear. They remain on the books, are mentioned in speeches, and are cited in debates. But they become decorative symbols without substance. The Constitution guarantees privacy, but agencies collect vast databases of citizen communications. Laws require judicial warrants, but governments purchase data from private companies that collected it without warrants. Regulations demand transparency, but classification and proprietary secrecy make oversight impossible.
Yet even unenforced laws retain a certain power. They serve as moral anchors, a documented standard against which abuse can be measured. They provide language for resistance and reform. They offer a focal point for coalition building among those who still believe in the principle of limited government. History suggests that repressive systems often collapse not because they're defeated by force but because they become unsustainable when enough people refuse to participate in the fiction that what's happening is legitimate.
The question becomes whether society can maintain a collective memory of what the rules were supposed to be long enough to restore them eventually. Can we remember that governments aren't supposed to predict and manipulate citizen behavior? Can we recall that privacy was once considered a right rather than a commodity? Can we preserve the conviction that humans should not be treated as predictable machines to be programmed by those in power?
If formal legal constraints prove inadequate to restrain surveillance capitalism and the behavioral state, other forms of power must emerge to create balance. The first and most critical is transparency. Democratic societies have always depended on sunlight as a disinfectant, on the assumption that abuses exposed lose their power.
A free press that investigates and reports without fear serves as a crucial counterweight to hidden power. Journalists who doggedly pursue information about surveillance programs, algorithmic manipulation, and data harvesting make the invisible visible. Whistleblowers who risk their careers and freedom to reveal what's happening inside classified programs or proprietary systems perform an essential democratic function, even when the law treats them as criminals for doing so.
But information alone isn't sufficient. Citizens also need tools that make surveillance and manipulation harder to carry out. This is where technology itself can serve as a form of resistance. Encryption systems that protect communications from interception, anonymization networks that prevent tracking, and decentralized platforms that don't concentrate data in corporate or government hands can all make it harder to build comprehensive behavioral profiles of entire populations.
These tools exist not as exotic technologies accessible only to technical specialists but increasingly as practical options available to anyone willing to learn basic digital hygiene. Using secure messaging applications, browsing through privacy-protecting networks, selecting services that don't monetize user data, and understanding how to limit digital footprints are all meaningful acts of technological self-defense.
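For readers curious what such protections look like in code, here is a minimal sketch of symmetric encryption using Python's widely used cryptography package, chosen here simply as one convenient example. Real secure messengers layer key exchange, authentication, and forward secrecy on top of primitives like this, but the essential promise is visible even in a toy: without the key, intercepted data is noise.

```python
# A minimal encryption sketch using the "cryptography" package
# (pip install cryptography). Illustrative only; real messaging apps add
# key exchange, authentication, and forward secrecy on top.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # shared secret, never sent in the clear
box = Fernet(key)

ciphertext = box.encrypt(b"meet at the usual place")
print(ciphertext)                     # unreadable to anyone intercepting it

plaintext = box.decrypt(ciphertext)   # only holders of the key can recover it
print(plaintext.decode())
```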
Beyond individual actions, civil society must demand fundamental changes in how technology is designed and deployed. The current system treats surveillance and behavioral manipulation as acceptable defaults, with privacy as an optional feature. This can be reversed. Systems can be built with privacy protections embedded from the ground up rather than tacked on as afterthoughts. Algorithms can be designed for transparency, with their logic open to audit and their decision-making processes explainable. Platforms can operate as genuine utilities, serving user interests rather than maximizing engagement at the expense of social costs.
Open-source software plays a vital role here, as it enables inspection. When code is open, experts can examine whether systems do what they claim to do, whether they contain hidden surveillance mechanisms, and whether they're vulnerable to manipulation. Transparency doesn't guarantee virtue, but opacity almost guarantees abuse.
Some technologists and ethicists advocate for cryptographic verification systems that could create mathematically enforced constraints on data use. Imagine systems where personal information is encrypted in ways that allow for practical computation without exposing the underlying data, where algorithms can prove they followed specified rules without revealing proprietary details, and where consent isn't just a matter of clicking a box but is technically enforced through the architecture of the systems.
These approaches remain more aspirational than actual, but they point toward the possibility that legal rules might someday be supplemented or even replaced by technical constraints that cannot be violated without detection. The system wouldn't need to trust that institutions will behave correctly, because the architecture itself would make certain abuses impossible or, at the very least, immediately visible.
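One small building block of "violations cannot occur without detection" already exists in ordinary code: the tamper-evident log. The sketch below is a toy, not a real verification system, and the logged events are invented, but it shows how each recorded data access can cryptographically commit to the history before it, so that quietly rewriting the record breaks the chain and becomes visible.

```python
# A toy tamper-evident audit log. Each entry's hash covers the previous
# entry's hash, so altering or deleting history invalidates the chain.
import hashlib, json

def append(log, event):
    prev = log[-1]["hash"] if log else "genesis"
    record = {"event": event, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log):
    prev = "genesis"
    for record in log:
        expected = hashlib.sha256(
            json.dumps({"event": record["event"], "prev": record["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log = []
append(log, "agency X queried location history of user 123")
append(log, "agency X queried contacts of user 123")
print(verify(log))                       # True: history intact
log[0]["event"] = "routine maintenance"  # attempt to quietly rewrite the past
print(verify(log))                       # False: tampering is detectable
```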
Ultimately, though, technology alone cannot solve what is fundamentally a political and moral problem. The most vigorous defense against surveillance capitalism and the behavioral state lies in civic engagement by people who understand what's at stake and refuse to surrender their autonomy. This means supporting organizations that fight for privacy rights and algorithmic transparency. It means choosing, whenever possible, to do business with companies that respect user autonomy. It means voting for candidates who take these issues seriously. It means having uncomfortable conversations with friends and family about how their behavior is being manipulated.
It also means being willing to accept some inconvenience in exchange for autonomy. Surveillance capitalism flourished partly because it offers genuine conveniences. It's easier to use services that automatically know what you want. It's satisfying to see content algorithmically curated to your interests. Breaking free of these systems often means accepting less convenient alternatives, doing more work yourself, and missing some of what everyone else is seeing.
This is the trade-off that each person must navigate: how much autonomy are you willing to sacrifice for convenience and connection? There's no single correct answer, but the decision should at least be made consciously, rather than defaulted to through ignorance or learned helplessness.
Surveillance capitalism and AI-driven behavioral control represent the latest iteration of an ancient dynamic: the tension between freedom and control, between human autonomy and institutional power, between the individual and the system. Every advance in technology for monitoring and manipulating populations has been met by counter-movements demanding privacy, liberty, and self-determination.
What makes the current moment particularly challenging is that manipulation is so sophisticated and so invisible. Previous generations faced overt censorship that they could recognize and resist, obvious surveillance that they could avoid or subvert, and crude propaganda that they could detect and reject. Today's algorithmic influence operates below the threshold of conscious awareness, shaping behavior through thousands of micro-interactions that no individual can perceive or comprehend.
Yet history offers grounds for cautious optimism. Repressive systems, no matter how technologically advanced, depend ultimately on human cooperation. The surveillance apparatus requires people to build, maintain, and operate it. The algorithms need programmers to write them. Databases require administrators to manage them. The infrastructure needs citizens who accept its legitimacy or at least don't actively resist it.
When enough people within these systems begin to question what they're participating in, when enough of those with technical knowledge become whistleblowers rather than enablers, and when enough citizens refuse to normalize the abnormal, systems that seemed invincible can crack surprisingly quickly.
This is why governments that rely on surveillance and behavioral manipulation often work so hard to isolate dissidents, to prevent the formation of collective resistance, to make opposition feel futile. They understand that their power depends on convincing people that resistance is impossible. The truth is usually the opposite: resistance is difficult, indeed, often dangerous, but rarely impossible.
The most crucial resistance may be the simplest: refusing to accept that being surveilled and manipulated is normal, that privacy is obsolete, and that autonomy is an outdated concept in the digital age. These are all claims made by those who benefit from surveillance capitalism and the behavioral state. They're not inevitable developments but choices that can be challenged and changed.
When governments fail to obey their own laws, society's survival depends not primarily on statutes and formal institutions, but on the moral convictions of ordinary people. Laws matter enormously when they're enforced, but when they're not, culture becomes the last line of defense. Do people still believe privacy matters? Do they still value autonomy? Do they still think humans should be treated as ends in themselves rather than means to someone else's ends?
These aren't just philosophical questions but practical ones that shape how societies evolve. A population that has forgotten why privacy matters will not fight to protect it. A population that has normalized constant surveillance will not recognize totalitarianism when it arrives. A population that accepts being treated as predictable machines will not demand to be treated as free human beings.
This is why education and culture matter so profoundly. Young people growing up in a world of ubiquitous surveillance may not realize that things were ever different or could be different. They may accept as natural what previous generations would have found intolerable. Teaching history, fostering critical thinking, and preserving institutional memory of what rights and freedoms mean all serve as forms of resistance against the normalization of control.
At stake in all of this is something more fundamental than privacy in the narrow sense or even freedom from surveillance. What's being contested is the question of whether human beings can remain genuinely human in an age of artificial intelligence and total information awareness.
If our behavior can be predicted with high accuracy, are we truly making free choices or just following scripts that algorithms have already written? If our beliefs are shaped by information environments carefully engineered to produce specific outcomes, are we genuinely thinking for ourselves? If our emotions are being triggered by systems designed to maximize engagement rather than support well-being, are we living authentic lives or performing roles in someone else's optimization game?
These questions don't have simple answers, but they need to be asked and grappled with honestly. The danger isn't just that surveillance capitalism and the behavioral state violate privacy rights, though they do. The deeper danger is that they fundamentally alter what it means to be a human being living in community with other human beings.
Traditional relationships involve genuine uncertainty and genuine encounter. You don't fully know what another person will say or do. This unpredictability is part of what makes human interaction meaningful. But in a world where AI systems predict behavior with high accuracy and guide interactions toward predetermined outcomes, that element of genuine encounter diminishes. Relationships become less about authentic connection and more about performing roles the algorithm expects.
Similarly, traditional democracy depends on citizens forming opinions through exposure to diverse perspectives and reasoned debate. But when each person lives in an algorithmically customized information bubble, seeing only content that reinforces their existing beliefs or triggers their deepest fears, genuine democratic deliberation becomes nearly impossible. People aren't engaging with different views; they're being managed by systems that treat political belief as just another consumer preference to be optimized.
This represents a fundamental threat to human agency and dignity. If we accept that humans are simply predictable machines to be programmed, we've abandoned something essential about what makes us human. We've reduced consciousness to computation, freedom to the illusion of choice, and human flourishing to whatever metrics the algorithm optimizes for.
Resisting this reduction isn't about nostalgia for some imagined past or rejection of technology itself. It's about insisting that technology should serve human purposes rather than human beings serving technological systems. It's about demanding that AI and surveillance capabilities be constrained by ethics, law, and democratic accountability rather than deployed without limit by whoever has the power to do so.
The struggle against surveillance capitalism and algorithmic control will not be won through any single law, any particular technology, or any individual act of resistance. It will be won, if it is won at all, through the accumulated efforts of millions of people who refuse to accept the reduction of human life to data points in someone else's behavioral model.
This means technologists who insist on building systems that respect user autonomy even when surveillance would be more profitable. It means journalists who investigate and expose the actions of surveillance systems, even when that investigation is dangerous or inconvenient. It means citizens who educate themselves about these issues and make choices that prioritize freedom over convenience. It means lawmakers who resist the temptation to deploy against their political opponents the same surveillance tools they once criticized.
Most fundamentally, it means preserving and passing forward a cultural memory that human beings have rights that no algorithm should violate, that autonomy matters more than optimization, that some things should never be done even if they're technically possible and economically profitable.
The law matters enormously and must be strengthened wherever possible. But when laws fail or are deliberately ignored, culture becomes the ultimate firewall. If enough people retain the conviction that surveillance and manipulation are wrong, that privacy and autonomy are worth protecting, and that humans should not be treated as predictable machines, then even sophisticated control systems become vulnerable.
The most important code isn't written in Python, C++, or any other programming language. It's written in the choices people make about what to accept and what to resist, what to build and what to refuse to build, what to normalize and what to challenge. It's written in courage, in curiosity, in the collective insistence that technology should enlarge human freedom rather than diminish it.
Ultimately, we are the firewall. Each of us, individually imperfect and limited, but collectively capable of protecting what matters most about being human in an age that increasingly treats humanity as just another resource to be extracted and optimized. The question is whether enough of us will recognize what's at stake before it's too late to resist.
BearNetAI, LLC | © 2024, 2025 All Rights Reserved
Support BearNetAI
BearNetAI exists to make AI understandable and accessible. Aside from occasional book sales, I receive no other income from this work. I’ve chosen to keep BearNetAI ad-free so we can stay independent and focused on providing thoughtful, unbiased content.
Your support helps cover website costs, content creation, and outreach. If you can’t donate right now, that’s okay. Sharing this post with your network is just as helpful.
Thank you for being part of the BearNetAI community.

Categories: AI Ethics, Governance & Society, Digital Privacy, Human Autonomy, Technology & Democracy
Glossary of AI Terms Used in this Post
Algorithmic Influence: The use of AI-driven systems to manipulate or guide human decision-making by curating personalized digital experiences.
Behavioral Futures Market: The commercial system in which predictive data about human behavior is bought, sold, or exchanged for profit.
Behavioral Governance: The use of AI to monitor and influence populations through predictive modeling and adaptive feedback mechanisms.
Behavioral State: A government or system that uses continuous data collection and AI analysis to predict, guide, or control the behavior of its citizens.
Deepfake: AI-generated synthetic media, such as videos or audio, that realistically imitate real individuals to deceive viewers.
Machine Learning: A subset of AI that enables systems to learn and improve from experience without being explicitly programmed.
Microtargeting: Delivering tailored messages to specific individuals or small groups based on AI analysis of personal data and behavioral traits.
Predictive Analytics: The use of AI algorithms to forecast future events or behaviors by analyzing patterns in large datasets.
Reinforcement Learning: A type of machine learning where AI systems learn optimal behavior through trial and reward, often used in recommendation engines.
Surveillance Capitalism: An economic system that monetizes personal data collected through digital surveillance to predict and shape human behavior.
Citations:
Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach. The Guardian.
Edelson, L., Kelley, P. G., & McDonald, A. M. (2021). Online Political Advertising Transparency: Ad Libraries and Their Use. arXiv.
Foer, F. (2017). World Without Mind: The Existential Threat of Big Tech. Penguin Press.
González-Bailón, S., & Wang, N. (2023). The Rise of AI in Political Communication: Persuasion and Polarization in the Digital Age. Oxford University Press.
Harari, Y. N. (2018). 21 Lessons for the 21st Century. Spiegel & Grau.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Tufekci, Z. (2015). Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency. Colorado Technology Law Journal.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.