Tyranny of Smartphones

When Apple unveiled the first iPhone in January 2007, most ordinary consumers rolled their eyes — "why would I need a phone to do all that?" Before long, however, Americans were buying smartphones in droves. By 2013, smartphones had overtaken "dumb phones" in global market share for the first time; they have never looked back.

Initially, the great attraction of smartphones was their ability to combine standard call and text features with access to every corner of the digital world in one convenient, handheld device. Email, online shopping, YouTube, you name it — anything you wanted to do on the internet, you could now do from your phone. Smartphones also came with built-in cameras, obviating the need to carry around a digital camera and enabling users to seamlessly upload and share pictures with one another.

Aside from the camera feature, early smartphones did not do much to bridge the divide between the digital and analog worlds: Anything one could not do (or did not want to do) online required the use of legs, arms, and mouths, not just thumbs. People walked to the store, grabbed merchandise, and spoke to the cashier to complete their transaction. The initial smartphone offered, in short, a convenient add-on to the transactions and interactions of everyday life (and frequently a welcome escape — or an unhealthy distraction — from them). But those who preferred to live their lives without one could still function in society.

As the iPhone celebrated its 10th anniversary, however, a change was clearly underway: Smartphones were increasingly bridging the digital and analog divide, functioning as mediators to the physical world around us. With Uber, you could call a cab using your smartphone; soon you couldn't call one any other way. With mobile ordering, you no longer had to stand in line and talk to a human being to obtain your food. With the quick-response (QR) code, you no longer had to read informational signs and maps and commit their contents to memory. The message was increasingly clear: If you wanted to participate in the modern economy, you had better buy a smartphone.

The pandemic only accelerated this trend. With businesses from museums to pubs concerned about minimizing human contact as much as possible, the smartphone presented itself as a one-size-fits-all solution. Suddenly there was a QR code for everything, directing customers straight to the app store to download a tailor-made app that would enable them to order food or buy tickets without ever interacting with an actual person. Soon every store and franchise had developed a unique app that all customers were required to download.

These were understandable as pandemic measures that, like masks and outdoor meetings, one could look forward to ditching as soon as it was safe to do so. The QR-code menu and "download our app" requirement, however, appear to be here to stay. Indeed, in many quarters of the market, smartphone dependence is only increasing.

This trend makes sense, especially in a tight labor market: A one-time expenditure on developing an app will often be far more economical than perpetual expenses related to employing wait staff or salespeople. The mediation of all goods and services through the smartphone also ensures that every business, however large or small, can cash in on the data economy, compiling detailed information on its customers and sending them notifications to keep them coming back for more. As Tim Wu put it in his book The Attention Merchants, this "fourth screen" has become "the undisputed new frontier of attention harvesting in the twenty-first century....From now on, whither thou goest, your smartphone goes too."

Seventeen years into the iPhone era, the smartphone has transformed from a consumer choice into a social "passport" — an indispensable tool for participation in today's economy. The data bear this out: The World Economic Forum found that mobile-phone use rose by 460% over 10 years, from an average of 45 minutes per day in 2011 to 252 minutes (four hours and 12 minutes) in 2021.

One might reasonably ask whether the smartphone differs from any other technological innovation. After all, new technologies often become pervasive, gradually reordering social and economic relations around themselves so that the cost of non-adoption becomes social exclusion. Once upon a time, for instance, you had a choice as to whether to electrify your house or use a telephone; both have since become prerequisites for participation in modern society. Why should the smartphone not follow the same trajectory? Who wouldn't want a smartphone, anyway?

In fact, quite a few people now want to opt out. As evidence of the devastating effects of digital addiction has become impossible to ignore, many consumers are exploring a return to "dumb phones" or trying out minimal phone models like the Light Phone, Wisephone, BoringPhone, Gabb phone, and Punkt phone. Parents want to shield their children by delaying smartphone use. Many more people might wish to ditch their smartphones but find that their employers or educational institutions have made it all but impossible to do so.

One professor recounted to us how he started every academic year by holding up his smartphone and asking each new crop of freshmen a question: "Do you think these are making your life better?" This year, for the first time, not a single student raised his hand. Perhaps there are good reasons why we might not want to live in a society that requires every employee or customer to carry around a handheld surveillance device with ready access to addictive distractions. If market pressures continue to drive the country inexorably toward smartphone hegemony, however, Americans must appeal to their elected representatives to use public policy as a tool for preserving consumer choice in this all-important area.

Because smartphones are demonstrably harmful, legislatures should ensure that these devices do not become mandatory for participation in society, especially for children. This is a bold ambition given the seemingly inevitable march of digitization, but it is certainly not unprecedented; just because a new technology is more efficient does not mean society has to adopt it or make it virtually compulsory. Truly universal surveillance or mandatory vaccination might be more efficient than what we have now, but Americans have rightly resisted moves in these directions. No sooner did countries develop nuclear weapons than we began looking for ways to make them less accessible and less usable.

Perhaps the best analogue, though, is the automobile: a great tool for extending human freedom, yet one that would have imperiled freedom had Americans allowed their cities to develop in such a way that no one could survive without a car. Thus, we used public policy to require sidewalks and bike lanes, and we developed affordable modes of public transportation. As Amusing Ourselves to Death author Neil Postman wrote, more important than what a new technology does is what it undoes. Culture always pays a price for technology unless we govern its use.

Left to itself, the market will always pursue the most efficient outcome. Yet this is not necessarily the most humane outcome, or the one that consumers themselves would choose. The smartphone is indeed a powerful and valuable tool, and it is clearly not going away. But that doesn't mean we should allow this tool to become a tyrant. We should not accept a world in which consumers and parents have no choice but to own one; we must push back.

SMARTPHONES AS SOCIAL PASSPORTS

In what sense, exactly, has the smartphone become a social passport? Put simply, society is dangerously near the point — and indeed for people in certain contexts and careers, has passed the point — where individuals no longer have any meaningful choice but to buy a smartphone to participate in the everyday activities of life. To leave your doorstep in the morning without a smartphone in your pocket is now almost as practically debilitating as leaving without pants on. In fact, it might be more so; at least you can still park your car in a city without wearing pants.

Smartphones have become so indispensable that it can be difficult to think of something one can do without one. Consider everyday commuting and shopping, beginning with the familiar urban chore of parking. To be sure, it might be more convenient to pay for parking using an app rather than a meter or pay station. In many cities, however, this convenience is quickly morphing into a requirement as parking garages do away with machines or attendants and replace them with QR codes. At one downtown D.C. garage, one of the authors was confronted with a flesh-and-blood parking attendant whose sole function was to text each driver a link to pay via smartphone. When he protested that he couldn't do that and asked if he could just pay by card, he was told he'd need to find somewhere else to park.

As for grabbing a bite to eat, figuring out parking may be just the first hurdle. Many restaurants now expect customers to browse menus and place their orders via QR code. Some have not yet gone all the way with this but instead have introduced a two-tier system whereby those placing mobile orders are guaranteed priority service while those who show up to order in person are sent to the back of the line. Those shopping for groceries might encounter a similar form of discrimination: a discount price for those who have downloaded the store's app and digital coupons; a higher price for everyone else. This innovation hits the elderly particularly hard — they are used to clipping paper coupons and are unlikely to have mastered the new app-based systems. Indeed, as only 61% of adults aged 65 and older own a smartphone, the device's transformation into a social passport is quickly becoming a form of age discrimination.

Such difficulties multiply as one moves beyond the regular daily rhythms of life and attempts to travel. Seeking to cut labor costs, some state parks are replacing ticket attendants with payment apps. Travelers who'd like to retreat to the wilderness thus need to take a smartphone along with them (and hope they receive cell service). Stadiums and event venues are increasingly moving to smartphone-only ticketing. United Airlines recently introduced smartphone-only onboard refreshment purchases, explaining the change in an in-flight magazine that also featured an article boasting that United was taking broad steps to improve accessibility for the blind in an effort to become "an airline for everyone."

Harder to justify (except perhaps on environmental grounds) is the near complete disappearance of paper maps from parks, zoos, and resorts. To navigate such unfamiliar sites, patrons are now routinely required to scan a QR code and download an app. For families, this "convenience" turns out to be anything but. Parents used to hand their older kids a map and tell them something like, "yes, you can go on that ride, but meet us at this spot at 2 p.m." Now everyone has to stick together — or else each child needs to have his own smartphone. When families get back to their Airbnb exhausted and ready to crash, they may find that the owners have made the property accessible only through the August Smart Lock app.

If these were the only complaints, skeptics might dismiss them as the needless griping of technophobes. But the problems extend far beyond vacationing and shopping. Increasingly, many apartment tenants face the same issue as Airbnb guests: app-only access to their homes via platforms like Geokey. Of course, they would not be able to afford an apartment in the first place without gainful employment, which also increasingly presupposes smartphone ownership. Some employees are required to clock in and out of their jobs using a designated app. Others, especially in white-collar professions, are required to use two-factor authentication (2FA) many times each day to access platforms required to complete their work. While until recently a phone with text-messaging capability was all one needed for 2FA, many platforms now use apps like Google Authenticator, making a smartphone a prerequisite for employment.
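The irony is that nothing in app-based 2FA technically requires a smartphone. Apps like Google Authenticator implement the standard time-based one-time password (TOTP) algorithm of RFC 6238, which any computing device can run. The minimal sketch below (in Python, using an illustrative test secret rather than any real credential) shows the entire algorithm; requiring a specific smartphone app is thus a policy choice by the platform, not a technical necessity.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Any device that can run these few lines (a laptop, a dedicated
# hardware token) can generate the same codes an authenticator app does.
print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical base32 secret for illustration
```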

Within civil society more broadly, the same pattern is at work. At many colleges and universities, learning-management systems require both students and faculty members to post and obtain information, or even access their email, through smartphone apps. One friend of ours who recently checked in to the hospital for cancer treatment was given two options: "Smartphone check-in" or "Scan QR code at registration." Respondents to our X (formerly Twitter) post about this phenomenon reported that some community pools require smartphones for access. For young teens to participate in school sports, they (and their parents) may be required to download apps like Hudl.

Perhaps the most sobering development is that governments themselves have begun to follow the private sector's lead, meaning the smartphone is no longer just a metaphorical passport, but an increasingly literal one. During the pandemic, Canadians seeking to reenter their country were required to show vaccination status using the ArriveCAN app. The U.S. Department of Homeland Security rolled out an app called CBP One "to provide the public a single portal to a variety of [U.S. Customs and Border Protection] services." The app "will eventually replace and upgrade existing CBP public-facing mobile applications to improve user interaction and services." That sounds very convenient — who wants to keep track of all those paper forms? Well, perhaps the occasional refugee who forgot to bring a smartphone along when he fled for his life. CBP One is mandatory even for asylum seekers — an extraordinary move that Amnesty International says violates international human-rights law.

Many conservatives might be skeptical of appeals to such laws, but these trends will likely not stop with international travel. The world could look quite different in just a few years, with paper tickets gone from airports and train stations and physical drivers' licenses replaced by digital ones.

Again, with smartphones now accounting for at least 85% of the U.S. mobile-phone market, such complaints might seem quaint or hypothetical: If almost everyone has a smartphone, who cares if businesses or governments require them?

But before we accept this future by default, we should pause to ask whether there might be any downsides. After all, smartphone use comes with disadvantages as well as advantages — bolstering the case for granting individuals more freedom to decide whether to buy smartphones for themselves and their children.

THE DANGERS OF SMARTPHONES

For many adults today, the smartphone has become less a blessing than a curse. They find themselves constantly distracted, anxious, and disconnected from their physical surroundings, diving down rabbit holes of addictive content they would rather avoid. For children — whose brains are still developing and highly plastic, and thus less able to resist impulses or manage emotions — the smartphone is proving not merely a nuisance, but a menace.

Smartphones can addict the brain like the most powerful of drugs. They can take users virtually anywhere — including into unsafe spaces like virtual casinos and strip clubs. They distract us from work and school, and hamper our ability to form relationships with others. And they allow adult predators to gain access to children. Alarmingly, 88% of teens ages 13 to 18 own a smartphone, as do 43% of 8- to 12-year-olds. As Brian Chen noted in the New York Times, there has been a downward age creep in smartphone ownership: On average, children are receiving their first smartphone around age 10.

As smartphones have become ubiquitous among tweens and teens, the well-being of these populations has correspondingly declined. In a new study on smartphones and mental health, Sapien Labs observed a clear global deterioration in the mental health of younger generations beginning just after 2010 — the year the smartphone became dominant. The study asked whether "those young adults who got their first smartphone at age 6 [were] doing worse than those who didn't get one until age 13 or 18." Its emphatic conclusion was yes — in fact, the younger the age of first smartphone use, the worse the mental-health outcomes were over time.

Most alarmingly, of all the negative mental-health outcomes measured, the one that rose most steeply with younger ages of first smartphone use was suicidal thoughts. As the authors observed, "[t]he virtual world eliminates important and essential...sensory modalities of human social interaction and bonding"; it is therefore "not an equivalent substitute" for the real world. The loneliness and social disconnection that result are both strong predictors of suicide risk.

Psychologist Jean Twenge's research supports similar conclusions. Citing data from the National Institute on Drug Abuse's Monitoring the Future survey, she found that

[t]eens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on nonscreen activities are more likely to be happy. There's not a single exception. All screen activities are linked to less happiness, and all nonscreen activities are linked to more happiness.

Smartphones sell themselves as tools that can keep us more connected, but in reality, they have made us more isolated than ever before. One 2013 study found that individuals getting acquainted in the presence of a smartphone struggled to connect to one another. Indeed, the data suggested that smartphones' mere presence disrupted human relationships. These findings make intuitive sense: Even when not in use, smartphones remind us of the world beyond the person we are with and the conversation we are having. They also make it difficult for us to relate to each other in person, stunting our social skills and our ability to form meaningful relationships. This is especially true for children, who are still learning to relate to and build connections with other human beings.

Smartphones have also weaponized temptation. Children and teens are only a few clicks or swipes away from distracting games, social-media platforms, pornography, online bullies, sexting, and adults manipulating apps to gain access to minors. Jesse Weinberger, an internet-safety expert, put it this way: "There's no connection to the dark side without the device." The smartphone is bringing that dark side increasingly close to children: A 2018 JAMA Pediatrics study of 12- to 17-year-olds found that around one in four teens receives sexually explicit texts and emails, and about one in seven sends sexts.

Additional harms from smartphones stem from their ability to disrupt children's education. Children's elementary- and middle-school years are supposed to establish a solid foundation for their future academic success. Now, constant smartphone-induced distraction during the school day is paving the way for poor performance. A 2017 study found that smartphones reduce a person's cognitive capacity and test-taking brainpower. As the authors noted, "the mere presence of one's smartphone may impose a 'brain drain' as limited-capacity attentional resources are recruited to inhibit automatic attention to one's phone, and are thus unavailable for engaging with the task at hand." This means that even if a child is not actively using a smartphone during class, simply having it on his person or nearby reduces his cognitive capacity to learn.

The data are clear: Smartphones are harmful to children's well-being. They pose a real threat to the future welfare of America's children and families.

In response to this overwhelming evidence of harm, some might propose parental controls as an adequate solution. This approach would be hard to sustain in a world where QR-code and app-based services are the norm: The most effective parental controls block precisely these links from opening and these apps from downloading.

Parental-control apps can't catch everything. TikTok and Snapchat do not grant outside parental-control software access to their content. Instagram allows parents to supervise their teen's main feed but not his direct messages unless he agrees to set up a "supervised account," which he can opt out of at any time. Direct messages are where dangerous activity often takes place — indeed, any app can pose a threat if it has direct-messaging capabilities. YouVersion's Bible app inadvertently became a "shadow social media" platform when adults began reaching out to children via the app's "community" feature. These adults could then send and receive illicit content from those children through the app's "scripture image" feature, which allows users to share images from their camera roll with other users.

Even standard industry parental-control tools, like Apple's Screen Time, have turned out to be notoriously buggy, often failing to block unwanted content. And even if these controls were effective, they expire when the child turns 13. In the meantime, tech-savvy kids can find all sorts of loopholes and workarounds. All of this creates enormous challenges for parents who would like to monitor their child's smartphone activity.

Compounding these problems is the fact that parents themselves also struggle with self-control around smartphones. Smartphones are not as dangerous for adults as they are for children, as adults are more capable of regulating their habits. But let's face it: Most of us are addicted, too. Many of the studies mentioned above apply to adults, highlighting how smartphones have reduced our cognitive capacity to perform other tasks and hindered our ability to form meaningful relationships.

Adult smartphone use is also harmful to children. To escape the tantrums of a toddler, all a parent needs to do is pull out a phone from his pocket. One study found that caregivers highly absorbed in their phones often responded harshly to children's misbehavior. This is hardly a recipe for healthy parenting.

Another recent study from the University of California, Santa Barbara, demonstrated that parents' smartphone use adversely affected their children's emotional intelligence. While using their phones, parents have "still face" — "an expressionless appearance that's often interpreted as depression, which can further impact a child's development of emotional skills." Other researchers discovered a link between parents' smartphone use and young children's problematic behaviors, from tantrums and aggression to anxiety and social withdrawal. Yet another study from the Wheatley Institute found that "adolescents are nearly four times as likely to be depressed if their parents are high level social media users."

The bottom line is that adults set an example for children. How do parents tell kids they can't have a smartphone when every adult they see is glued to his device? Many Americans will continue to own smartphones, of course. But we might also want the freedom to model healthy behavior by leaving them at home from time to time — a choice the marketplace is increasingly intent on denying us.

For those with and without children, a further reason to be concerned about mandatory smartphone use is the rise of "surveillance capitalism." Shoshana Zuboff coined this term in a 2019 book by the same name; she documented at length how Big Tech built its business model on the systematic tracking of user behavior, which enabled it to create predictive algorithms that would command top dollar from advertisers. New privacy regulations in many Western countries have forced giants like Apple, Alphabet, and Meta to back away from the most sweeping forms of user surveillance, but for all but the craftiest users, smartphone ownership means accepting a level of comprehensive 24/7 surveillance beyond the wildest dreams of 20th-century totalitarian-state leaders. If you use GPS and Apple Pay and (like most smartphone owners) take oodles of pictures, you are effectively consenting to the creation of a digital map of all your movements, purchasing habits, and social circles. From this virtual map, algorithms can deduce dozens of other conclusions about your political preferences, emotional states, guilty pleasures, and far more.

At the same time, every local business from Chick-fil-A to the mom-and-pop bookstore is aggressively adopting smaller-scale forms of surveillance capitalism. Before the smartphone era, a business could only entice customers to return by offering them a wonderful shopping experience and hoping for the best. Some had mailing lists or loyalty programs through which they could send customers coupons and advertisements in the mail (and later via email). But unless customers went out of their way to sign up, local businesses could not do much to reach them aside from placing ads on billboards or in the local paper.

Today, however, every business seems to have its own app that logs customers' shopping preferences and generates tailored advertising that will bombard them with push notifications if they take too long to return. By mediating once-analog transactions through the smartphone, businesses can now ensure that any customer who so much as wanders into their store may be permanently added to their database.

This is not an evil practice; it's perfectly rational for businesses to use technological innovations to reach more customers. Indeed, many customers might be more than happy to trade their freedom and privacy for such convenience. Some likely relish the new shopping experiences and personalized recommendations these devices make possible. At least some of us, though, long for the return of a bygone world where we could buy a cup of coffee or a pair of shoes without entering into a long-term relationship with every place of business we encounter.

FIGHTING BACK

Smartphones are clearly here to stay. But because the harms documented above are all but inescapable in our increasingly digital world, elected officials will need to step in to support individuals, families, and communities who wish to limit their use of smartphones or avoid them altogether.

Those of us on the right might normally prefer to leave such decisions to the free market. Yet the market does not appear inclined to preserve the freedom of those who choose not to use smartphones. Instead, a new standard is emerging — one in which motives of cost efficiency, customer retention, and good old-fashioned herd mentality conspire to pressure consumers to either embrace a 24/7 smartphone lifestyle or be shut out of the modern economy. A forceful response from legislators is thus necessary to preserve consumer choice in the phone market and help parents protect their children from harm.

The particular challenge posed by smartphones is new, but broader precedents for protecting consumers are not hard to find. Indeed, consumer welfare has served as a guiding principle of American law for much of the last century. Laws in many areas have expanded consumer choice and mitigated consumer harms. And while conservatives might have opposed many of these laws as bureaucratic overreaches, most of them are not going to be repealed anytime soon; in the meantime, we might as well use them to defend the goods conservatives value, including familial well-being and virtuous self-restraint.

One analogue that could provide direct legal precedent for the kind of regulation we propose is the Americans with Disabilities Act of 1990 (ADA). This law, which built on the 1964 Civil Rights Act and passed the House by an overwhelming margin (377-28), embodied Americans' collective intuition that we should aid individuals who are, through no fault of their own, excluded from full participation in the ordinary relations and exchanges of civil society. Accordingly, Title III of the ADA requires that any places of "public accommodation" (a broad category that includes restaurants, hotels, libraries, grocery stores, laundromats, parks, and much more) make every reasonable effort to provide equal access to individuals suffering from a wide range of physical and mental disabilities.

The ADA is often associated with conditions like blindness and paraplegia, but its reach is significantly broader than that, encompassing conditions like attention-deficit/hyperactivity disorder (ADHD) and depression. Given that smartphone addiction itself is increasingly being diagnosed and treated as a clinical disorder, it's not a stretch to suggest that the ADA on its face requires businesses to accommodate people who don't use smartphones, including those who must avoid the devices precisely because of such a diagnosis.

Even if one were to object to classifying compulsive smartphone use as an addiction, smartphones have been linked to mental disabilities that the ADA already covers. Schools and universities in recent years have granted broader accommodations for students suffering from such ADA-covered conditions as anxiety, depression, and ADHD. At the same time, those students are often required to use the devices that help drive these very maladies. Surely if the ADA requires universities to provide alternative testing arrangements for a student diagnosed with anxiety or depression, it should also require them to provide alternative arrangements for students with regard to devices that trigger or exacerbate those conditions. Similarly, the ADA's Title I protections for employees would suggest that employment procedures requiring staff to carry their own personal smartphones are a form of unlawful discrimination as well.

Lawyers might respond by noting that the extent to which ADA requirements apply to websites and apps remains unclear (though recent cases, such as Robles v. Domino's Pizza, suggest the statute may apply to digital platforms). Our point here, however, is not to insist that accessibility requirements extend into the digital realm: If courts decide that brick-and-mortar stores must be handicapped accessible but their apps need not be, fair enough. But if judges were to rule this way and allow stores to make their services accessible exclusively to smartphone users, they would radically undercut the ADA's purpose.

To protect the most vulnerable in our society — notably the elderly (who may not be tech-savvy) and the young (whom it would be wise to safeguard from smartphone saturation) — federal officials should adopt laws that prevent places of public accommodation from requiring customers to use digital platforms to access analog goods and services. Such measures would address a matter of public interest and be consistent with legal precedents.

A more incremental approach would involve expanding state-level public-accommodation laws. States enacted these laws during and after the civil-rights era to prevent invidious discrimination in places open to the general public. Although the federal Civil Rights Act of 1964 prohibits discrimination in such places based on only four categories — "race, color, religion, or national origin" — states have added other categories to their own lists. For example, 18 states prohibit discrimination based on marital status, 25 do so based on sexual orientation, 20 based on age, five based on pregnancy or childbirth, and three based on veteran status. State lawmakers could amend these statutes to include smartphone ownership as an additional protected category. Since amending state-level statutes is easier than altering federal civil-rights laws, a state-by-state strategy may protect more non-users more quickly than would one that targets federal law.

Some conservatives might object to these amendments, arguing that the regulatory tradition stemming from the Civil Rights Act and the ADA has stifled innovation, overburdened private businesses, and expanded the administrative state. Such skepticism is understandable. Yet even those with more libertarian inclinations can recognize that there are cases where the state must step in to protect consumers' freedom.

One such case that garnered broad support on the right in recent years was the dispute over private-sector vaccine mandates during the pandemic. When Covid-19 vaccines became widely available in 2021, tens of thousands of private-sector businesses — many of them qualifying as places of public accommodation — adopted rules stating that individuals could only enter their premises or access their services (or remain employed) if they carried proof of vaccination status. In many cases, establishments did this without coordination or collusion; they simply followed the path of least resistance and bowed to employees' demands. One could argue that they were entirely within their rights as private businesses to do so. And if only a handful of businesses had followed suit, it would not have been a problem.

Collectively, however, such requirements threatened to turn a vaccination card into a social passport — a condition for continued participation in civil society. Given that many individuals harbored serious concerns about the health risks or ethics of the vaccines, conservatives and libertarians recognized the need to fight back and protect the freedom of both consumers and employees.

Florida governor Ron DeSantis led the way on this front with Executive Order 21-81, which declared:

Whereas, requiring so-called COVID-19 vaccine passports for taking part in everyday life — such as attending a sporting event, patronizing a restaurant, or going to a movie theater — would create two classes of citizens based on vaccination; and whereas, it is necessary to protect the fundamental rights and privacies of Floridians and the free flow of commerce within the state...[b]usinesses in Florida are prohibited from requiring patrons or customers to provide any documentation certifying COVID-19 vaccination...to gain access to, entry upon, or service from the business.

This executive order (and legislation codifying it) easily stood up to legal challenge. So what's to stop other governors from protecting their constituents' right to take part in everyday life without having to carry a smartphone?

Indeed, one could argue that vaccine passports are far more justifiable than smartphone passports. The stated rationale for the former, at least, was protecting the health and safety of the business's own employees. For the latter, the implicit rationale is often to avoid having to hire employees at all. If those on the right could mobilize to resist private tyranny in the case of vaccines, they can (and should) similarly join one another in resisting the slow but inexorable creep of private despotism in the form of smartphone requirements.

Although prohibiting smartphone mandates would be a common-sense reform, it would still likely encounter vigorous opposition from most sectors of private industry. In the short term, then, legislators might have more success if they focus more narrowly on protecting children from the harms of these devices. There are solid precedents for this type of regulation: Whatever your views on gun control, almost no one opposes limiting firearm purchases to adults. Similarly, in recognizing that children are not mature enough to operate automobiles in a safe manner, states imposed age thresholds for driving and later adopted licensing systems that require proper training before one may be granted a driver's license.

Now that years of research findings have revealed the dangers that smartphones pose to children, the government should adopt restrictions on their use by minors in a similar manner. Such constraints might include an age limit or certification requirement for purchase and ownership. More modest regulations could require smartphone producers to offer more robust built-in parental controls, effectively converting smartphones into Light Phones by default until the user turns 18.

If such measures sound fanciful in a political environment where age certifications for hard-core porn websites are facing legal challenges, Americans could at least insist that public institutions stop actively conspiring with the market to turn smartphones into social passports for children. Public schools, for instance, currently contribute to the problem: They enthusiastically bring technology into the classroom and require app-based access to school sports and activities. Many private schools, however, have implemented policies that ban phones at school — to the delight of parents and teachers alike.

Public schools can and should do likewise. Governments at every level could encourage them by tying public-school funding to restrictions on smartphone use during the school day. By pushing back against the market's indifference to children's well-being, such regulations could also help reverse the trend of smartphones becoming social passports in the private sector.

Parents certainly do their best to limit their children's use of smartphones. But because of the network effects these devices create, parental efforts on their own are not enough to shield children from harm. In the same way that roadways full of reckless and under-age drivers pose a risk to the public at large, those who use smartphones irresponsibly endanger children who may not use smartphones themselves. Policy changes are thus required to mitigate the toxic network effects of smartphones by limiting or discouraging their use until children reach an age at which they can more responsibly handle the pressures of these devices.

THE CONSERVATIVE CASE FOR REGULATION

The rise of smartphones in the past decade converged with two troubling trends in modern society. The first was the rise of a libertarian ideology on both the left and the right holding that the more information, the more access, the more openness we have, the better. Proponents of this view consider the slightest barriers to information access intolerable forms of oppression or censorship. If we put the whole world at the people's fingertips, they ask, what could possibly go wrong?

Quite a bit, it turned out: nine-year-olds hooked on porn, 16-year-olds slitting their wrists, and a society harried by perpetual anxiety and locked in thumb-war shouting matches.

The second trend, identified by Joshua Mitchell, is our tendency to turn supplements into substitutes — a pattern he dubbed "substitutism." Americans, explains Mitchell, have accustomed themselves to an economic routine in which a new product is developed to meet a narrow need (e.g., opioids to ease pain, plastic water bottles to keep us hydrated on the go), but then quickly becomes a supplement to the existing goods that make up the warp and woof of our lives. This product is then used to meet more and more "needs" (many of them manufactured by the market itself) and finally brought in as a full substitute for whatever humane activity we used to engage in.

The expansion of smartphones into every corner of society represents perhaps the most jarring example of substitutism yet. Public and private institutions are asking people to accept a world in which a handheld device takes the place of thousands of quotidian forms of formerly face-to-face interaction. What's more, they are doing so at a time when that device is designed to maximize its addictive qualities and minimize any built-in guardrails. This is akin to making automobiles mandatory while building cars without brakes and handing the keys to kids. Americans are sensible enough to keep high-risk products like alcohol and tobacco out of the hands of minors, but when it comes to the equally dangerous smartphone, adults effectively tell children they will not be able to get by without one.

A sane public-policy response would focus on at least one of two targets: smartphone and app developers, by requiring them to make these products far less hazardous and less accessible to children; and the rest of the marketplace, by banning businesses and government agencies from limiting access to their services to smartphone users. The best path forward will likely include elements of both.

There is nothing new or un-American about such responses, nor need they destroy liberty. Public officials restricted the purchase of tobacco products while requiring private businesses to protect customers from second-hand smoke, but adults are still free to smoke, and tobacco companies still make money. Regulators subjected automobiles to age and licensing requirements and adopted rules to protect pedestrian and bicycle access to urban centers, but Americans are still free to purchase and drive cars, and automobile manufacturers remain solidly profitable.

A world in which smartphones are more difficult for children to access, in which burgers and concert tickets are easier to purchase without this device, need not be a big-government dystopia. It is unlikely to come into being, however, without prudent government action. Some problems are simply too widespread, have gained too much momentum, or are too nearly inescapable for individual families, community institutions, or businesses to overcome them on their own. The soft tyranny of the smartphone is one such problem. In these situations, the state's role is to step in and protect the traditions, institutions, and values of society that new technologies threaten to erode and supplant.

All Americans have a role to play in resisting this tyranny. In our families and our communities, we must work to create more tech-free spaces rather than more tech-dependent ones. And in politics, we must fight it using all the tools of public policy at our disposal.

By: Felicity Jones

Ask Felicity Jones a question now

5 Comments

Over three thousand ape skins are now for sale to China as a tariff negotiation with Elon Musk Name
03/09/2025 1:01 am
Breaking News Monky Pox Outbreak Attack. Over three thousand apes were disinfected from the These apes have been harvested for organs. Over three thousand ape skins are now for sale to China as a tariff negotiation with Elon Musk Name / Alias / Title Comment Your comment was submitted. Sign Up or Sign In Email Email Password Password We hope this site can you in many aspects of your organization. About us About us Contact us Terms & Conditions Privacy Services eliminator bot
03/15/2025 12:33 am
Wikipedia Search New World Order conspiracy theory Article Talk Language Download PDF Watch View source This article is about the conspiracy theory. For the use of the term in international politics, see New world order (politics). The New World Order (NWO) is a term often used in conspiracy theories which hypothesize a secretly emerging totalitarian world government.[3][4][5][6][7] The common theme in conspiracy theories about a New World Order is that a secretive power elite with a globalist agenda is conspiring to eventually rule the world through an authoritarian one-world government—which will replace sovereign nation-states—and an all-encompassing propaganda whose ideology hails the establishment of the New World Order as the culmination of history's progress. Many influential historical and contemporary figures have therefore been alleged to be part of a cabal that operates through many front organizations to orchestrate significant political and financial events, ranging from causing systemic crises to pushing through controversial policies, at both national and international levels, as steps in an ongoing plot to achieve world domination.[3][4][5][6][7] The reverse side of the Great Seal of the United States (1776). The Latin phrase novus ordo seclorum, appearing on the reverse side of the Great Seal since 1782 and on the back of the U.S. one-dollar bill since 1935, translates to "New Order of the Ages",[1] and alludes to the beginning of an era where the United States of America is an independent nation-state; conspiracy theorists claim this is an allusion to the "New World Order".[2] Before the early 1990s, New World Order conspiracism was limited to two American countercultures, primarily the militantly anti-government right, and secondarily the part of fundamentalist Christianity concerned with the eschatological end-time emergence of the Antichrist.[8] Academics who study conspiracy theories and religious extremism, such as Michael Barkun and Chip Berlet, observed that right-wing populist conspiracy theories about a New World Order not only had been embraced by many seekers of stigmatized knowledge but also had seeped into popular culture, thereby fueling a surge of interest and participation in survivalism and paramilitarism as many people actively prepare for apocalyptic and millenarian scenarios.[4][6] These political scientists warn that mass hysteria over New World Order conspiracy theories could eventually have devastating effects on American political life, ranging from escalating lone-wolf terrorism to the rise to power of authoritarian ultranationalist demagogues.[4][6][9] History of the term Main article: New world order (politics) General usage (pre-Cold War) During the 20th century, political figures such as Woodrow Wilson and Winston Churchill used the term "new world order" to refer to a new period of history characterized by a dramatic change in world political thought and in the global balance of power after World War I and World War II.[10] The interwar and post-World War II period were seen as opportunities to implement idealistic proposals for global governance by collective efforts to address worldwide problems that go beyond the capacity of individual nation-states to resolve, while nevertheless respecting the right of nations to self-determination. 
Such collective initiatives manifested in the formation of intergovernmental organizations such as the League of Nations in 1920, the United Nations (UN) in 1945, and the North Atlantic Treaty Organization (NATO) in 1949, along with international regimes such as the Bretton Woods system and the General Agreement on Tariffs and Trade (GATT), implemented to maintain a cooperative balance of power and facilitate reconciliation between nations to prevent the prospect of another global conflict. These cosmopolitan efforts to instill liberal internationalism were regularly criticized and opposed by American paleoconservative business nationalists from the 1930s on.[11][need quotation to verify] Progressives welcomed international organizations and regimes such as the United Nations in the aftermath of the two World Wars, but argued that these initiatives suffered from a democratic deficit and were therefore inadequate not only to prevent another world war, but also to foster global justice, as the UN was chartered to be a free association of sovereign nation-states rather than a transition to democratic world government. Thus, cosmopolitan activists around the globe, perceiving the IGOs as too ineffectual for global change, formed a world federalist movement.[12] British writer and futurist H. G. Wells went further than progressives in the 1940s, by appropriating and redefining the term "new world order" as a synonym for the establishment of a technocratic world state and of a planned economy, garnering popularity in state socialist circles.[13][14] Usage as reference to a conspiracy (Cold War era) During the Second Red Scare, both secular and Christian right American agitators, largely influenced by the work of Canadian conspiracy theorist William Guy Carr, increasingly embraced and spread dubious fears of Freemasons, Illuminati and Jews as the alleged driving forces behind an "international communist conspiracy". The threat of "Godless communism", in the form of an atheistic, bureaucratic collectivist world government, demonized as the "Red Menace", became the focus of apocalyptic millenarian conspiracism. The Red Scare came to shape one of the core ideas of the political right in the United States, which is that liberals and progressives, with their welfare-state policies and international cooperation programs such as foreign aid, supposedly contribute to a gradual process of global collectivism that will inevitably lead to nations being replaced with a communistic/collectivist one-world government.[15] James Warburg, appearing before the United States Senate Committee on Foreign Relations in 1950, famously stated: "We shall have world government, whether or not we like it. The question is only whether world government will be achieved by consent or by conquest."[16] Right-wing populist advocacy groups with a paleoconservative world-view, such as the John Birch Society, disseminated a multitude of conspiracy theories in the 1960s claiming that the governments of both the United States and the Soviet Union were controlled by a cabal of corporate internationalists, "greedy" bankers and corrupt politicians who were intent on using the UN as the vehicle to create a "One World Government". This anti-globalist conspiracism fueled the campaign for U.S. withdrawal from the UN. American writer Mary M. Davison, in her booklet The Profound Revolution (1966), traced the alleged New World Order conspiracy to the establishment of the U.S. 
Federal Reserve in 1913 by international bankers, whom she claimed later formed the Council on Foreign Relations in 1921 as a shadow government.[citation needed] At the time the booklet was published, many readers would have interpreted "international bankers" as a reference to a postulated "international Jewish banking conspiracy" masterminded by the Rothschild family.[15][additional citation(s) needed] Arguing that the term "New World Order" is used by a secretive global elite dedicated to the eradication of the sovereignty of the world's nations, American writer Gary Allen—in his books None Dare Call It Conspiracy (1971), Rockefeller: Campaigning for the New World Order (1974), and Say "No!" to the New World Order (1987)—articulated the anti-globalist theme of contemporary right-wing conspiracism in the U.S. After the fall of communism in the early 1990s, the de facto subject of New World Order conspiracism shifted from crypto-communists, perceived to be plotting to establish an atheistic world communist government, to globalists, perceived to be plotting to implement a collectivist generally, unified world government ultimately controlled by an untouchable oligarchy of international bankers, corrupt politicians, and corporatists, or the United Nations itself. The shift in perception was inspired by growing opposition to corporate internationalism on the American right in the 1990s.[15][additional citation(s) needed] In his speech, Toward a New World Order, delivered on 11 September 1990 during a joint session of the US Congress, President George H. W. Bush described his objectives for post-Cold War global governance in cooperation with post-Soviet states. He stated: Until now, the world we've known has been a world divided—a world of barbed wire and concrete block, conflict, and the cold war. Now, we can see a new world coming into view. A world in which there is the genuine prospect of new world order. In the words of Winston Churchill, a "world order" in which "the principles of justice and fair play ... protect the weak against the strong ..." A world where the United Nations, freed from cold war stalemate, is poised to fulfill the historic vision of its founders. A world in which freedom and respect for human rights find a home among all nations.[17] The New York Times observed that progressives were denouncing this new world order as a rationalization of American imperial ambitions in the Middle East at the time. At the same time conservatives rejected any new security arrangements altogether and fulminated about any possibility of a UN revival.[18] Chip Berlet, an American investigative reporter specializing in the study of right-wing movements in the US, wrote that the Christian and secular far-right were especially terrified by Bush's speech. Fundamentalist Christian groups interpreted Bush's words as signaling the End Times. At the same time, more secular theorists approached it from an anti-communist and anti-collectivist standpoint and feared for hegemony over all countries by the United Nations.[4] Post-Cold War usage American televangelist Pat Robertson wrote the best-selling book The New World Order (1991). American televangelist Pat Robertson, with his best-selling book The New World Order (1991), became the most prominent Christian disseminator of conspiracy theories about recent American history. 
He describes a scenario where Wall Street, the Federal Reserve System, the Council on Foreign Relations, the Bilderberg Group and the Trilateral Commission control the flow of events from behind the scenes, constantly nudging people covertly in the direction of world government for the Antichrist.[6] It has been observed that, throughout the 1990s, the galvanizing language used by conspiracy theorists such as Linda Thompson, Mark Koernke and Robert K. Spear led to militancy and the rise of the American militia movement.[19] The militia movement's anti-government ideology was spread through speeches at rallies and meetings, books and videotapes sold at gun shows, shortwave and satellite radio, fax networks, and computer bulletin boards.[15] It has been argued that it was overnight AM radio shows and propagandistic viral content on the internet that most effectively contributed to more extremist responses to the perceived threat of the New World Order. This led to the substantial growth of New World Order conspiracism, with it retroactively finding its way into the previously apolitical literature of numerous Kennedy assassinologists, ufologists, lost land theorists and—partially inspired by fears surrounding the "Satanic panic"—occultists. From the mid-1990s onward, the amorphous appeal of those subcultures transmitted New World Order conspiracism to a larger audience of seekers of stigmatized knowledge, with the common characteristic of disillusionment of political efficacy.[6] From the mid-1990s to the early 2000s, Hollywood conspiracy-thriller television shows and films also played a role in introducing a general audience to various fringe, esoteric theories related to New World Order conspiracism—which by that point had developed to include black helicopters, FEMA "concentration camps", etc.—theories which for decades previously were confined to largely right-wing subcultures. The 1993–2002 television series The X-Files, the 1997 film Conspiracy Theory and the 1998 film The X-Files: Fight the Future are often cited as notable examples.[6] Following the start of the 21st century, and specifically during the 2007–2008 financial crisis, many politicians and pundits, such as Gordon Brown[20] and Henry Kissinger,[21] used the term "new world order" in their advocacy for a comprehensive reform of the global financial system and their calls for a "New Bretton Woods" taking into account emerging markets such as China and India. 
These public declarations reinvigorated New World Order conspiracism, culminating in talk-show host Sean Hannity stating on his Fox News program Hannity that the "conspiracy theorists were right".[22] Progressive media-watchdog groups have repeatedly criticized Fox News in general, and its now-defunct opinion show Glenn Beck in particular, for not only disseminating New World Order conspiracy theories to mainstream audiences, but possibly agitating so-called "lone wolf" extremism, particularly from the radical right.[23][24] In 2009, American film directors Luke Meyer and Andrew Neel released New World Order, a critically acclaimed documentary film which explores the world of conspiracy theorists—such as American radio host Alex Jones—who vigorously oppose what they perceive as an emerging New World Order.[25] The growing dissemination and popularity of conspiracy theories has also created an alliance between right-wing agitators and hip hop music's left-wing rappers (such as KRS-One, Professor Griff of Public Enemy and Immortal Technique), illustrating how anti-elitist conspiracism can create unlikely political allies in efforts to oppose a political system.[26] Conspiracy theories Postulated implementations Alleged conspirators Criticism See also References Further reading External links Last edited 10 days ago by ErrorRaffyline0 Wikipedia Wikimedia Foundation Powered by MediaWiki Content is available under CC BY-SA 4.0 unless otherwise noted. Privacy policy Contact Wikipedia Code of Conduct Developers Statistics Cookie statement Terms of Use Desktop
03/20/2025 6:58 pm
Raëlism is classified as a new religious movement by scholars of religion.[3] It has also been described as a UFO religion,[4] a UFO movement,[5] and an ETI (extra-terrestrial intelligence) religion.[6] The organization promoting Raëlianism is the International Raëlian Movement (IRM),[7] or the Raëlian Church.[8] In France, where the religion originated, the government's Parliamentary Commission on Cults labels it a "secte", a French term with negative connotations similar to the English word "cult".[9][10] In 1997, a parliamentary inquiry commission issued a report through the Belgian Chamber of Representatives that also categorized the Belgian Raelian Movement (Mouvement Raëlien Belge) as a secte.[9][11] A public gathering of Raëlists in 2006 in the Insa-dong neighbourhood of Seoul, South Korea, protesting their government's 2003 ban on Raël entering the country Raëlism is among the largest UFO religions in existence,[12] and in the mid-2000s, the scholar of religion Andreas Grünschloß called it "one of the most consolidated UFO groups internationally active today."[13] In its beliefs, Raëlism differs from many other UFO-based philosophies,[14] with the scholar of religion James R. Lewis terming it "the most thoroughly secular of all the UFO religions."[15] Most other UFO religions, such as the Aetherius Society, Ashtar Command, and Heaven's Gate, use many of the beliefs of the late-19th-century religion Theosophy; Raëlism does not.[16] Raëlists have also been characterised as having a "belief in ufology",[17] but Raëlians often stress that they do not regard themselves as ufologists.[18] Raëlism is materialistic and rejects the existence of the supernatural,[19] endorsing atheism and rejecting the idea that gods exist.[20] The religion's founder, Raël, characterises traditional religion as irrational and unscientific,[15] presenting his alternative as a philosophy free from "obscurantism and mysticism".[21] Raëlians call their belief system a "scientific religion",[22] with the International Raëlian Movement using the motto "Science is our religion; religion is our science."[23] The religion emphasizes the use of science to solve the world's problems,[24] and practitioners regard Raël as a pioneer of science who will one day be regarded as a peer of Galileo and Copernicus.[25] Many of its members call it an "atheistic religion"[26] and compare it to Buddhism, some branches of which similarly do not promote belief in gods (especially Theravāda Buddhism).[27] Along with science, the other main basis of Raël's ideas is the Bible.[28] Noting the "central role" of the Bible in Raëlism, the scholar of religion Eugene V. Gallagher suggested that it was a "thoroughly biblical and thoroughly Christian" philosophy.[29] Similarly, the sociologist of religion Susan J. Palmer characterised Raëlism as both fundamentalist and Abrahamic in its reliance on the Bible.[30] Raël nevertheless criticised Christianity for what he believed was its role in perverting the Bible's message, presenting himself as an opponent of the Roman Catholic Church.[31] Raëlism is not inclusive of other religions, with new members expected to formally renounce any previous religious affiliations.[32]
https://www.atptech.com/atp-robotx/
03/21/2025 1:15 am
Demons and unclean spirits
03/24/2025 11:46 pm
Function (computer programming)

For other uses, see Function.

In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit[1] of software logic that has a well-defined interface and behavior and can be invoked multiple times. Callable units provide a powerful programming tool.[2] The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability.[3]

Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units, with different implications and features.

Terminology

Some programming languages, such as COBOL and BASIC, make a distinction between functions that return a value (typically called "functions") and those that do not (typically called "subprogram", "subroutine", or "procedure"). Other programming languages, such as C, C++, and Rust, only use the term "function" irrespective of whether they return a value or not. Some object-oriented languages, such as Java and C#, refer to functions inside classes as "methods".

History

The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines."[4] Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine,[5][6] contrasted with an open subroutine or macro.[7] However, Alan Turing had discussed subroutines in a 1945 paper on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack.[8]

The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence (a series of instructions) at each call site. Subroutines were implemented in Konrad Zuse's Z4 in 1945. Also in 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines.[9][10]

In January 1947 John Mauchly presented general notes at "A Symposium of Large Scale Digital Calculating Machinery" under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. There he discusses serial and parallel operation, suggesting:

    ...the structure of the machine need not be complicated one bit.
    It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use. In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding.[4]

Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II.[11] She and the other ENIAC programmers used the subroutines to help calculate missile trajectories.[11] Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines.[12]

Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses. Such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s, such as the UNIVAC I, the PDP-1, and the IBM 1130, typically used a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines.

The Burroughs B5000[13] (1961) was one of the first computers to store subroutine return data on a stack. The DEC PDP-6[14] (1964) was one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970), and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines.[15]

Language support

In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together.

One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming.

Libraries

Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs. Many early computers loaded the program instructions into memory from a punched paper tape.
Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"[16]); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use.

Return by indirect jump

To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address. On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable.

Jump to subroutine

Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack.

In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example,

           ...
           JSB MYSUB    (Calls subroutine MYSUB.)
    BB     ...          (Will return here after MYSUB is done.)

to call a subroutine called MYSUB from the main program. The subroutine would be coded as

    MYSUB  NOP          (Storage for MYSUB's return address.)
    AA     ...          (Start of MYSUB's body.)
           ...
           JMP MYSUB,I  (Returns to the calling program.)

The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB,I which branched to the location stored at location MYSUB.

Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls.

Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.
Call stack

Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address.

The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose.

The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter. Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs, while the latter was more directly accessible.

When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, those which have been called but have not yet returned). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management.

However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data. In a multi-threaded environment, there is generally more than one stack.[17] An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store its activation records.
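As a rough illustration of the frames described above (a hypothetical sketch, not taken from the article), the following C++ program prints the address of a local variable at each level of a recursive call. On typical implementations each call gets its own stack frame, so the addresses step by roughly the frame size; the exact direction and spacing are implementation-defined:

    #include <cstdio>

    // Each invocation gets its own stack frame, so 'local' lives at a
    // different address in every nested call.
    void show_frames(int depth) {
        int local = depth;  // stored in this call's frame
        std::printf("depth %d: &local = %p\n", depth, (void*)&local);
        if (depth < 3) {
            show_frames(depth + 1);  // pushes a new frame
        }                            // returning pops this call's frame
    }

    int main() {
        show_frames(0);
        return 0;
    }

On a machine whose stack grows downward, the printed addresses typically decrease with depth, making the per-call frames directly visible.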
Delayed stacking

One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both.

This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves.[18][19][20] To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns.

Features

In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control.

Implementations

The features of implementations of callable units evolved over time and vary by context. This section describes features of the various common implementations.

General characteristics

Most modern programming languages provide features to define and call functions, including syntax for:

- Delimiting the implementation of a function from the rest of the program
- Assigning an identifier, a name, to a function
- Defining formal parameters with a name and data type for each
- Assigning a data type to the return value, if any
- Specifying a return value in the function body
- Calling a function
- Providing actual parameters that correspond to a called function's formal parameters
- Returning control to the caller at the point of call
- Consuming the return value in the caller
- Disposing of the values returned by a call
- Providing a private naming scope for variables
- Identifying variables outside the function that are accessible within it
- Propagating an exceptional condition out of a function and handling it in the calling context
- Packaging functions into a container such as a module, library, object, or class

Naming

Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure). Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit: function. The C-family languages use the keyword void to indicate no return value.

Call syntax

If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x). A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored. Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello").
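To make the two call forms concrete (a minimal hypothetical sketch, not from the article), the following C++ fragment calls a value-returning function both inside an expression and as a stand-alone statement whose result is silently discarded:

    #include <cmath>
    #include <cstdio>

    int main() {
        double x = 2.0;
        double y = std::sqrt(x) + 1.0;  // return value consumed in an expression
        std::printf("%f\n", y);         // called for its effect; result ignored is fine
        std::sqrt(x);                   // legal: return value silently discarded
        return 0;
    }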
Parameters

Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments:

- By value: a copy of the argument is passed. The default in most Algol-like languages after Algol 60, such as Pascal, Delphi, Simula, CPL, PL/M, Modula, Oberon, Ada, and many others, including C, C++ and Java.
- By reference: a reference to the argument is passed, typically its address. Selectable in most Algol-like languages after Algol 60, such as Algol 68, Pascal, Delphi, Simula, CPL, PL/M, Modula, Oberon, Ada, and many others, including C++, Fortran and PL/I.
- By result: the value computed during the call is copied to the argument on return. Ada OUT parameters.
- By value-result: a copy of the argument is passed in, and the value computed during the call is copied to the argument on return. Algol; Swift in-out parameters.
- By name: like a macro, the parameters are replaced with the unevaluated argument expressions, and the argument is then evaluated in the context of the caller every time the callable uses the parameter. Algol, Scala.
- By constant value: like by-value, except that the parameter is treated as a constant. PL/I NONASSIGNABLE parameters; Ada IN parameters.

Return value

In some languages, such as BASIC, a callable has different syntax (i.e. keyword) for a callable that returns a value vs. one that does not. In other languages, the syntax is the same regardless. In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#. In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow.

Side effects

In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution. Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies.[21]

In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither a return value nor a side effect.
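The two most common conventions, and the side effect that by-reference passing makes possible, can be seen directly in C++, which supports both. This is a generic illustration (invented function names), not an example from the article:

    #include <cstdio>

    void add_two_by_value(int n) { n += 2; }   // modifies a private copy only
    void add_two_by_ref(int& n)  { n += 2; }   // modifies the caller's variable

    int main() {
        int v = 5;
        add_two_by_value(v);
        std::printf("%d\n", v);  // still 5: only the callee's copy changed
        add_two_by_ref(v);
        std::printf("%d\n", v);  // now 7: the callee wrote through the reference
        return 0;
    }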
Local variables

Most contexts support local variables: memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack, along with other information such as the return address.

Nested call – recursion

If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global. Languages going back to ALGOL, PL/I and C, as well as modern languages, almost invariably use a call stack, usually supported by the instruction sets, to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables.

Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide-and-conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers:

    int Fib(int n) {
        if (n <= 1) {
            return n;
        }
        return Fib(n - 1) + Fib(n - 2);
    }

Early languages like Fortran did not initially support recursion because only one set of variables and one return address were allocated for each callable.[22] Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., the CDC 6000 series, PDP-6, GE 635, System/360, and UNIVAC 1100 series, could use one of those registers as a stack pointer.

Nested scope

Main article: Nested function

Some languages, e.g., Ada, Pascal, PL/I and Python, support declaring and defining a function inside another, e.g., inside a function body, such that the name of the inner function is only visible within the body of the outer.

Reentrancy

If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations, since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads.

Overloading

Main article: Function overloading

Some languages support overloading: allowing multiple callables with the same name in the same scope, but operating on different types of input. Consider the square root function applied to real number, complex number, and matrix input. The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain, since each one has a name that is relatively easy to understand and to remember, instead of longer and more complicated names like sqrt_real, sqrt_complex and sqrt_matrix.

Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments, or it fails if the input arguments do not select an overload. Older and weakly typed languages generally do not support overloading. Here is an example of overloading in C++, two functions Area that accept different types:

    // returns the area of a rectangle defined by height and width
    double Area(double h, double w) { return h * w; }

    // returns the area of a circle defined by radius
    double Area(double r) { return r * r * 3.14; }

    int main() {
        double rectangle_area = Area(3, 4);
        double circle_area = Area(5);
    }

PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example:

    DECLARE gen_name GENERIC(
        name     WHEN(FIXED BINARY),
        flame    WHEN(FLOAT),
        pathname OTHERWISE);

Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, to "flame" when it is FLOAT, etc. If the argument matches none of the choices, "pathname" will be called.

Closure

Main article: Closure (computer science)

A closure is a callable plus values of some of its variables captured from the environment in which it was created.
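As a concrete illustration in C++ (a hypothetical sketch; the article itself discusses closures in Lisp), a lambda can capture a local variable, and the resulting callable object carries that captured value with it, even after the creating scope has returned:

    #include <cstdio>
    #include <functional>

    // Returns a closure: the lambda captures 'step' by value, so the
    // returned callable carries that value out of make_adder's scope.
    std::function<int(int)> make_adder(int step) {
        return [step](int x) { return x + step; };
    }

    int main() {
        auto add3 = make_adder(3);
        std::printf("%d\n", add3(10));  // prints 13
        return 0;
    }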
Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side effects.

Exception reporting

Besides its happy-path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution. Most modern languages support exceptions, which allow for exceptional control flow that pops the call stack until an exception handler is found to handle the condition.

Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication: the callable writes the value and the caller reads it after the call.

In the IBM System/360, where a return code was expected from a subroutine, the return value was often designed to be a multiple of 4, so that it could be used as a direct index into a branch table, often located immediately after the call instruction, to avoid extra conditional tests, further improving efficiency. In the System/360 assembly language, one would write, for example:

           BAL   14,SUBRTN01   go to a subroutine, storing return address in R14
           B     TABLE(15)     use returned value in reg 15 to index the branch table,
    *                          branching to the appropriate branch instr.
    TABLE  B     OK            return code =00  GOOD                 }
           B     BAD           return code =04  Invalid input        } Branch table
           B     ERROR         return code =08  Unexpected condition }
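In a modern language, the two reporting styles contrast as follows. This is a generic C++ sketch with invented function names, not an example from the article:

    #include <cstdio>
    #include <stdexcept>

    // Style 1: report failure by throwing; the stack unwinds to a handler.
    int parse_positive(int raw) {
        if (raw <= 0) throw std::invalid_argument("must be positive");
        return raw;
    }

    // Style 2: report failure through the return value; 0 signals success.
    int parse_positive_rc(int raw, int* out) {
        if (raw <= 0) return 1;  // nonzero return code = failure
        *out = raw;
        return 0;
    }

    int main() {
        try {
            parse_positive(-5);
        } catch (const std::exception& e) {
            std::printf("caught: %s\n", e.what());
        }
        int value = 0;
        if (parse_positive_rc(7, &value) == 0) {
            std::printf("ok: %d\n", value);
        }
        return 0;
    }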
Call overhead

A call has runtime overhead, which may include but is not limited to:

- Allocating and reclaiming call stack storage
- Saving and restoring processor registers
- Copying input variables
- Copying values after the call into the caller's context
- Automatic testing of the return code
- Handling of exceptions
- Dispatching, such as for a virtual method in an object-oriented language

Various techniques are employed to minimize the runtime cost of calls.

Compiler optimization

Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used twice, since the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult; indeed, it is undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case: that every callable may have side effects.

Inlining

Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize the code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when the callable is only called once or its body is very short, like one line.

Sharing

Callables can be defined within a program, or separately in a library that can be used by multiple programs.

Inter-operability

A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue.

Built-in functions

Main article: Intrinsic function

A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions.[23] A built-in function does not need to be defined like other functions since it is built in to the programming language.[24]

Programming

Trade-offs

Advantages

Advantages of breaking a program into functions include:

- Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
- Reducing duplicate code within a program
- Enabling reuse of code across multiple programs
- Dividing a large programming task among various programmers or various stages of a project
- Hiding implementation details from users of the function
- Improving readability of code by replacing a block of code with a function call, where a descriptive function name serves to describe the block of code; this makes the calling code concise and readable even if the function is not meant to be reused
- Improving traceability (i.e. most languages offer ways to obtain the call trace, which includes the names of the involved functions and perhaps even more information such as file names and line numbers); without decomposing the code into functions, debugging would be severely impaired

Disadvantages

Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism. A function typically requires standard housekeeping code, both at the entry to and exit from the function (the function prologue and epilogue, usually saving general-purpose registers and the return address as a minimum).

Conventions

Many programming conventions have been developed regarding callables. With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables. Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct.

Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead.
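As an illustration of the one-task and verb-phrase naming conventions (an invented example, not from the article), a function that both computes and prints can be split so each callable does exactly one job:

    #include <cstdio>

    // One task per callable, each named with a verb phrase.
    double compute_average(const double* values, int count) {
        double sum = 0.0;
        for (int i = 0; i < count; ++i) sum += values[i];
        return count > 0 ? sum / count : 0.0;
    }

    void print_average(const double* values, int count) {
        std::printf("average = %f\n", compute_average(values, count));
    }

    int main() {
        double data[] = {1.0, 2.0, 3.0};
        print_average(data, 3);
        return 0;
    }

Keeping computation and output separate also lets compute_average be reused and tested without touching the console.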
Examples

Early BASIC

Early BASIC variants require each line to have a unique number (a line number) that orders the lines for execution. They provide no separation of the code that is callable, no mechanism for passing arguments or returning a value, and all variables are global. They provide the command GOSUB, where "sub" is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and then continues on the next line on return.

    10 REM A BASIC PROGRAM
    20 GOSUB 100
    30 GOTO 20
    100 INPUT "GIVE ME A NUMBER"; N
    110 PRINT "THE SQUARE ROOT OF"; N;
    120 PRINT "IS"; SQR(N)
    130 RETURN

This code repeatedly asks the user to enter a number and reports the square root of the value. Lines 100-130 are the callable.

Small Basic

In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine. The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body, which ends with the EndSub keyword.[25]

    Sub SayHello
      TextWindow.WriteLine("Hello!")
    EndSub

This can be called as SayHello().[26]

Visual Basic

In later versions of Visual Basic (VB), including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used to return no value and Function to return a value. When used in the context of a class, a procedure is a method.[27] Each parameter has a data type that can be specified; if it is not, it defaults to Object for later versions based on .NET and Variant for VB6.[28]

VB supports parameter-passing conventions by value and by reference via the keywords ByVal and ByRef, respectively. Unless ByRef is specified, an argument is passed ByVal; therefore, ByVal is rarely explicitly specified. For a simple type like a number, these conventions are relatively clear: passing ByRef allows the procedure to modify the passed variable, whereas passing ByVal does not. For an object, the semantics can confuse programmers, since an object is always treated as a reference. Passing an object ByVal copies the reference, not the state of the object. The called procedure can modify the state of the object via its methods, yet cannot modify the object reference of the actual parameter.

    Sub DoSomething()
        ' Some Code Here
    End Sub

This does not return a value and has to be called stand-alone, like DoSomething.

    Function GiveMeFive() As Integer
        GiveMeFive = 5
    End Function

This returns the value 5, and a call can be part of an expression like y = x + GiveMeFive().

    Sub AddTwo(ByRef intValue As Integer)
        intValue = intValue + 2
    End Sub

This has a side effect: it modifies the variable passed by reference. It could be called for variable v like AddTwo(v). Given that v is 5 before the call, it will be 7 after.

C and C++

In C and C++, a callable unit is called a function. A function definition starts with the name of the type of value that it returns, or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces. In C++, a function declared in a class (as non-static) is called a member function or method. A function outside of a class can be called a free function to distinguish it from a member function.[29]

    void doSomething() { /* some code */ }

This function does not return a value and is always called stand-alone, like doSomething().

    int giveMeFive() { return 5; }

This function returns the integer value 5. The call can be stand-alone or in an expression like y = x + giveMeFive().

    void addTwo(int *pi) { *pi += 2; }

This function has a side effect: it modifies the value passed by address, setting it to the input value plus 2. It could be called for variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of the variable. Given that v is 5 before the call, it will be 7 after.

    void addTwo(int& i) { i += 2; }

This function requires C++; it would not compile as C.
It has the same behavior as the preceding example but passes the actual parameter by reference rather than passing its address. A call such as addTwo(v) does not include an ampersand, since the compiler handles passing by reference without extra syntax in the call.

PL/I

In PL/I, a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like:

    change_sign: procedure(array);
      declare array(*,*) float;
      array = -array;
    end change_sign;

This could be called with various arrays as follows:

    /* first array bounds from -5 to +10 and 3 to 9 */
    declare array1 (-5:10, 3:9) float;
    /* second array bounds from 1 to 16 and 1 to 16 */
    declare array2 (16,16) float;
    call change_sign(array1);
    call change_sign(array2);

Python

In Python, the keyword def denotes the start of a function definition. The statements of the function body follow, indented on subsequent lines, and end at the line that is indented the same as the first line, or at end of file.[30]

    def format_greeting(name):
        return "Welcome " + name

    def greet_martin():
        print(format_greeting("Martin"))

The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin() to write "Welcome Martin" to the console.

Prolog

In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form:

    A :- B

which has the logical reading "A if B", behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B. Consider, for example, the Prolog program:

    mother_child(elizabeth, charles).
    father_child(charles, william).
    father_child(charles, harry).
    parent_child(X, Y) :- mother_child(X, Y).
    parent_child(X, Y) :- father_child(X, Y).

Notice that the motherhood function, X = mother(Y), is represented by a relation, as in a relational database. However, relations in Prolog function as callable units. For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:

    ?- parent_child(elizabeth, Y).
    Y = charles.

    ?- parent_child(X, Y).
    X = elizabeth, Y = charles.
    X = charles, Y = harry.
    X = charles, Y = william.

    ?- parent_child(william, harry).
    no.

    ?- parent_child(elizabeth, charles).
    yes.

