
Facebook: The Story Of How It Went Globally Social

Based on extensive research and insightful exploration, Steven Levy tells the inside story of how a driven tech nerd took a young company from a dorm room to the global arena, where the power to change the world came with the inherent danger of such wide-ranging influence.

Mark Zuckerberg acknowledges this danger when he says, “The big lesson from the last few years is we were too idealistic and optimistic about the ways that people would use technology for good and didn’t think enough about the ways that people would abuse it”.

Here are 10 interesting things about this tech giant from Steven Levy’s Facebook: The Inside Story:

 

Mark Zuckerberg

The man behind the mission to connect the world is the poster boy of extraordinary success:

‘He’s the CEO of Facebook, the world’s largest social network— the world’s largest human network of any kind, ever— approaching 2 billion members, more than half of whom log in every day. It’s made him, in today’s reckoning, the sixth-richest person in the world.’

From the dorm room to being the rage on campus

In 2004, while the company was in its Harvard phase, Kirkland Suite H33 was regarded as Harvard’s own Silicon Valley.

Hour by hour, the impetus for students to sign up began to flip from engaging in a diverting pastime to an absolute necessity, as not being on Thefacebook made you a virtual exile on the physical campus.

The safe social space that brought students together

Thefacebook.com, as the site was registered when it began its journey, promised privacy and safeguards against misbehaviour.

‘Privacy was perhaps the defining characteristic of this new website. By limiting enrollment to those who had emails on the Harvard.edu domain, he made a safe space for students to share information they volunteered about themselves.’

The gambles that paid off

Open Reg and News Feed were considered high-risk features, but they actually set Facebook’s growth chart soaring.

‘Open Reg allowed billions of users to flock to Facebook. And the News Feed would keep them there, making the site as totally consuming for everybody as it was for college kids when Thefacebook first appeared.’

Facebook gave people a voice

The one thing that made Facebook hugely popular was that it offered people a platform where their voices could be heard across social groups.

 “Not only is it freedom of speech, it’s giving people a platform to actually articulate how they feel and what they think and gain support from it and make it known, which you couldn’t do unless you were being interviewed on TV or by a reporter for a newspaper prior to this.”

When Zuckerberg toyed with the idea of selling Facebook

After mulling over a deal to sell out, Zuckerberg finally said ‘no sale’ to Yahoo! and employees got to stay with the ‘cool’ company.

Furthermore, going to Yahoo! would have meant the end of the dream as well as the end of a period of their lives that would never be matched: working like crazy on a project that millions of people loved while being involved in a daily geek spring break of office romances, video games, and gonzo coding binges.

The 5 guidelines that Facebook worked by

Internal guidelines laid out for company employees listed four points, the fifth being a Zuckerberg addition:

Focus on Impact.

Be Bold.

Move Fast and Break Things.

Be Open.

Zuckerberg liked those but insisted on a fifth: Build Social Value.

The African dream that went bust 

In 2016, a Facebook satellite built to extend internet coverage to distant parts of Africa was to be launched with Elon Musk’s SpaceX rocket:

‘It was then that Zuckerberg learned that the SpaceX rocket, the one carrying the satellite he had been gleefully touting as an Internet savior for the struggling continent, had blown up on the launchpad, a day before the scheduled blastoff.’

The “Facebook Effect” on the American election

In 2016 the unthinkable happened. As Donald J. Trump was elected America’s president, fingers were pointed at Facebook:

In the weeks leading up to the election, there had been reports of so-called fake news, or misinformation intentionally spread through Facebook’s algorithms, being circulated widely on Facebook’s News Feed, which had become the major source of news for millions of users.

The breach of trust that opened floodgates of criticism

 The events of 2018 became a major setback for Facebook as privacy violations by the company made headlines across the world.

And the dam burst in 2018, when news came that Facebook had allowed personal information of up to 87 million users to end up in the hands of a company called Cambridge Analytica, which allegedly used the data to target vulnerable voters with misinformation.


In Facebook: The Inside Story, Steven Levy proves his mettle as the founding guru of technology journalism, drawing on his understanding of the dynamics of Silicon Valley and integrating input from key players with interviews of more than three hundred Facebook employees, past and present.

About this tech Goliath, Levy writes, “It is a company that both benefits from and struggles with the legacy of its origin, its hunger for growth, and its idealistic and terrifying mission. Its audaciousness— and that of its leader— led it to be so successful. And that same audaciousness came with a punishing price.”

Moral Dilemmas and Networking: How Facebook Began

How much power and influence does Facebook have over our lives? How has it changed how we interact with one another? And what is next for the company – and us?

Facebook is the biggest social media network in the world, and there is no denying its power and omnipresence in our daily lives. And in light of recent controversies surrounding election-influencing “fake news” accounts, the handling of its users’ personal data, and growing discontent with the actions of its founder and CEO, never has the company been more central to the national conversation.

Award-winning tech reporter Steven Levy presents a never-before-seen inside look at the making and building of the company. Below is an excerpt recounting one of the many investor stories from Facebook’s early days:

*

Moral Dilemma

In March 2005, Thefacebook finally moved into an office. Parker secured a second-floor space on Emerson Street in downtown Palo Alto, over a Chinese restaurant.

By then Zuckerberg had moved out of the Los Altos house. As the company was getting bigger it was less seemly that the CEO was bunking with the underlings. After crashing in different locations for a few months, Zuckerberg would move to a small apartment in downtown Palo Alto, a few blocks from the office. He had no TV, just a mattress on the floor and a few sticks of furniture. He was the CEO and biggest shareholder of a company with more than a million users and he still stacked his clothes on the floor.

In the first few weeks in the office, Thefacebook faced a financial crisis. Though it hadn’t yet spent all of Thiel’s angel money, the server bills and other costs were accumulating. The company still needed a new pot of cash, ideally coming from an investor who could act as an adviser to a CEO who had never even worked for a big company before, let alone run one. There would be no problem getting the money. But the choice of lead funder was fraught.

Zuckerberg had a strong preference for who he wanted to fill that role: Washington Post chairman and CEO Don Graham. Not a venture capitalist. Chris Ma, the father of one of Zuckerberg’s Kirkland House classmates, headed business development for the Post, and his daughter Olivia’s description of Thefacebook’s conquest of the college market intrigued him. In January 2005, Parker and Zuckerberg went to Washington, DC, to explore a business relationship. Ma invited Graham to the meeting, and the Post CEO listened in fascination as Zuckerberg described how Thefacebook worked. He wondered, though, whether privacy was an issue. Are people convinced that their posts will be seen only by those whom they want to see them? he asked.

People were indeed comfortable with sharing, Zuckerberg told him. A third of his users, he said, share their cell-phone numbers on their profile page. “That’s evidence that they trust us.”

Graham was startled at how emotionless and hesitant this kid was. At times, before he’d answer a question— even something that he must have been asked thousands of times, like what percentage of Harvard kids were on Thefacebook— he would fall silent, staring into the ether for thirty seconds or so. Does he not understand the question? Graham wondered. Did I offend him?

Nonetheless, before the meeting was over, Graham became convinced that Thefacebook was the best business idea he’d heard in years, and told Zuckerberg and Parker that if they wanted an investor who was not a VC, the Post would be interested.


Facebook: The Inside Story is crammed with insider interviews, never-before-reported reveals, anecdotes, and exclusive details about the company’s culture and leadership. In the process, the book explores how Facebook has changed our world and what the consequences will be for us all.

How Social Media Manipulates You

A Human’s Guide to Machine Intelligence by Kartik Hosanagar is a relevant read in today’s world. The book examines how the algorithms and artificial intelligence underlying the technology that surrounds us, in device after device, rob us of our power to make decisions. From the news we see to the products we buy and where and what we eat, our daily decisions and routines are now greatly influenced by developments in the technology sector. The author also discusses the potentially dangerous biases that could emerge, and how we can keep them in check.

Here are a few instances of how social media is slowly coming to dominate our real lives:

Because social media feeds and their layouts are assembled by algorithms, they are widely seen as catalysts for fake news. This fake news propagates misinformation among people, pulling them further away from real issues.

Social media has become such an intrinsic part of our lives that it now controls and hinders our daily routines. App notifications and gamification take advantage of the human need for immediate gratification and social acceptance, hampering habits such as sleeping early and impairing our judgment about how to use our time better.

The algorithms operating across social media also influence our choices. When we purchase an item, the recommendations provided are known to gently push us into buying certain things.

Many social media platforms program their algorithms so that the content one sees is personalized and filtered. By studying the pattern of content a user generally prefers, the algorithm decides what is to be shown to the user and what is to be left out.

Social media is also known to affect people’s moods and emotions. In a study conducted by Facebook in 2012, people were found to post more positively when the posts selected for their feed by the news-feed algorithm had positive content; the opposite was also true.

Many dating and socializing applications control the way one networks with people, as their algorithms look for people with similar interests or simply recommend one person to another solely on the basis of mutual friends. This removes the scope for connecting two people with differing interests who might nonetheless get along quite well.

The ability of such platforms to filter content so precisely to our preferences creates a “filter bubble”, which leads to a high degree of polarization around everything from musical taste to political ideology.

A Human’s Guide to Machine Intelligence is an entertaining and provocative look at one of the most important developments of our time.

6 Reasons Why Digital Transformations Fail

Digital technology frees workers from tedious tasks, allowing them to migrate to higher value-added responsibilities. As with any powerful new technology, there is indeed the potential for destructive applications. As with the prior three industrial revolutions, individuals and societies will be affected significantly, and companies will either transform or die.

Here’s a list of reasons why digital transformations fail:

  1. “Part of the issue is terminology. Most people don’t realize that digital disruption is the Fourth Industrial Revolution. The term “digital” is very broad.”

  2. “Transformation during industrial revolutions demands a different game plan than innovation within the current business model.”

  3. “True transformation must include building capabilities to stay ahead of your competition long term.”

  4. “For an industrial revolution-driven transformation to take off, you need a different, disciplined, new business model game plan.”

  5. “The transformation is incomplete if the new business model cannot be built with an eye toward perpetual evolution.”

  6. “The underlying cause of why 70 percent of digital transformations fail is a lack of sufficient discipline. There’s insufficient rigor in both digital transformation takeoff as well as in staying ahead.”


Using dozens of case studies and his own considerable experience, Tony Saldanha, in his book Why Digital Transformations Fail, shows how digital transformation can be made routinely successful and, instead of representing an existential threat, become the opportunity of a lifetime.

Know All about AI in ‘A Human’s Guide to Machine Intelligence’

Kartik Hosanagar’s A Human’s Guide to Machine Intelligence is a phenomenal book that examines how algorithms and artificial intelligence are shaping our lives, and what one can do to stay in control. Embedded in every popular tech platform and every web-enabled device, these algorithms and artificial intelligence carry out a plethora of functions for us, from choosing what products we buy to how we find a job.

Through his book, Kartik Hosanagar tries to explain how and why we need to arm ourselves with a better, deeper and more nuanced understanding of the phenomenon of algorithmic thinking. He examines various episodes of such algorithms going rogue and why one needs to be more cautious while using such technology.

 

Here are some facts about AI from the book!

Match.com, one of the most popular dating websites in the United States, was launched in 1995 with the aim of finding people their perfect partner. In 2011, however, a Financial Times reporter revealed that although the company’s algorithm asked people to list the characteristics they would want in an ideal partner, these lists were ignored. Instead, the people the website urged users to reach out to were chosen based on the profiles those users had previously visited.

“The conventional narrative is that algorithms will make faster and better decisions for all of us, leaving us with more time for family and leisure. But the reality isn’t so simple.”

 

 

Google’s autocomplete feature, first introduced by Kevin Gibbs, is something we now take for granted. Yet there have been many instances in which this feature has reinforced the prejudices assumed about certain subjects.

“But it’s far more disturbing to ask if Google might have unintentionally led impressionable people who did not initially seek this information to webpages filled with biased and prejudiced commentaries, effectively delivering new audiences directly to hate-mongering sites.”

 

 

Through collaborative filtering, the algorithms used by Netflix, Amazon, and other online firms produce a range of shows or products biased toward what is already popular, rather than promoting obscure and niche items. This is primarily because these algorithms tend to recommend things based on what others are consuming.

“We developed simulations of several commonly used recommendation algorithms to test the theory, and they indeed demonstrated that these algorithms can create a rich-get-richer effect for popular items.”
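To make that feedback loop concrete, here is a minimal, hypothetical Python simulation. It is not the simulation described in the book; the item counts and the probability that a user follows the recommendation are invented for illustration. It only shows how a recommender that always surfaces the most-consumed item lets popular items pull further ahead.

import random

# Hypothetical sketch, not the book's actual simulation: a recommender that
# always suggests the currently most-consumed item. Small early leads snowball,
# illustrating the "rich-get-richer" effect described above.
NUM_ITEMS = 50
NUM_USERS = 10_000
FOLLOW_PROB = 0.3  # assumed chance that a user accepts the recommendation

random.seed(7)
consumption = [1] * NUM_ITEMS  # seed every item with one view so no count is zero

for _ in range(NUM_USERS):
    recommended = consumption.index(max(consumption))
    if random.random() < FOLLOW_PROB:
        choice = recommended                  # follow the popularity-based recommendation
    else:
        choice = random.randrange(NUM_ITEMS)  # browse independently of the recommender
    consumption[choice] += 1

top_share = max(consumption) / sum(consumption)
print(f"Share of all consumption captured by the most popular item: {top_share:.1%}")

Even with most simulated users ignoring the recommendation, the single most popular item ends up with a share far above the roughly 2 percent it would receive if choices were uniform.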

 

 

Following the public release of Google’s much-discussed ranking algorithm in 1999, various website owners created “shadow” websites that linked back to their primary domains. Similarly, in the present age, Instagram and Twitter are working hard to minimise the presence of bot and spam accounts created to like and repost other accounts, thereby boosting the spammers’ rank in the platforms’ ranking algorithms.

 ∼

“And manipulability will only become an increasing concern as algorithms come to be used in other domains with more serious consequences. Suppose a fraudster knew exactly what rules credit card companies used to flag suspicious activity, or a terrorist knew exactly what TSA screening systems were looking for in their image-processing algorithms. With that knowledge, it would become easy to avoid detection.”

 

 

Social media websites such as Facebook and Twitter, along with search engines such as Google, have over time become major sources of information and news. However, concerns over their use of personalization algorithms have grabbed the attention of many: by accumulating information about our preferences, these companies create a “filter bubble” that only shows us things matching those preferences, barring alternative perspectives.

“As we engineer our algorithmic systems, the algorithms themselves certainly deserve a high degree of scrutiny.”


A Human’s Guide to Machine Intelligence is an entertaining and provocative look at one of the most important developments of our time and a practical user’s guide to this first wave of artificial intelligence.


Kartik Hosanagar on Aadhar and the AI Conundrum

Algorithms and the artificial intelligence that underlies them make a staggering number of everyday choices for us. In his book, A Human’s Guide to Machine Intelligence, Kartik Hosanagar draws on his own experiences designing algorithms professionally, as well as on examples from history, computer science, and psychology, to explore how algorithms work and why they occasionally go rogue, what drives our trust in them, and the many ramifications of algorithmic decision making.

He examines episodes like the fatal accidents of self-driving cars; Microsoft’s chatbot Tay, which was designed to converse on social media like a teenage girl, but instead turned sexist and racist; and even our own common, and often frustrating, experiences on services like Netflix and Amazon.

Here’s the author’s perspective on the application of AI for Aadhaar verification:


Artificial Intelligence (AI) is ushering in innovations around the world and India is no exception. Fashion retailer Myntra has rolled out AI-generated apparel designs as part of its Moda Rapido and Here and Now brands. Gurgaon-based GreyOrange Robotics is deploying robots to manage and automate warehouses. Companies like Flipkart are trying to roll out voice integration into their shopping experiences. Modern AI thrives on data, and the grand-daddy of all relevant datasets in India might well be Aadhar, the world’s largest biometric identification system with 1.2 billion records about citizens.

AI has many applications of such data, including detecting bureaucratic corruption in the way government funds are disbursed to citizens as well as identifying tax fraud by citizens themselves. Fintech startups are exploring how to use data from Aadhar and the broader India Stack to make credit approval decisions and enable financial inclusion of individual citizens as well as SMEs.

While there are several potential benefits, there are undoubtedly many challenges with an initiative like Aadhar. The most obvious one relates to data security and the privacy of citizens, especially since the scope of Aadhar has expanded well beyond the original objective of plugging leaks in welfare schemes and now includes many more aspects of citizens’ social and financial lives. Aadhar has therefore been the subject of several rulings by the Supreme Court of India and the many nuances of this debate have previously been discussed in this outlet. But there will also be a new kind of challenge as we mine data and make more decisions using modern AI. It will be tempting to apply AI to many new areas including tax compliance, real estate, credit approvals, and more.

In the U.S., there have been documented instances of AI bias. One well-known example was the use of algorithms to compute risk scores for defendants in the criminal justice system. These scores are used to guide judges and parole officers in making sentencing and parole decisions respectively. An analysis in 2016 showed that the algorithms had a race bias, i.e. they were more likely to falsely predict future criminality in black defendants than white defendants. Similarly, there have been examples of gender biases in resume-screening algorithms and race biases in loan approval algorithms. There will be similar concerns in India, perhaps heightened by lax regulatory oversight by governments and poor compliance by firms. As AI systems make decisions about which loan applications to approve, will they be susceptible to humans’ gender, religious, and caste biases? Will algorithms used to catch criminal behaviour also share these prejudices? Might resume-screening algorithms also have preferred castes and communities like some human interviewers?

Given the many challenges posed by algorithmic decisions based on large-scale data about citizens, I do believe that India needs clear regulations on data privacy and automated decisions that corporations and governments can make based on such data. In the EU, GDPR gives consumers the right to access data that companies store about them, correct or delete such data, and even limit its use for automated decisions. It bans decisions based solely on the use of “sensitive data,” including data regarding race, politics, religion, gender, health, and more. It also includes a right to explanation with fully automated decisions. Essentially, it mandates that users be able to demand explanations behind the algorithmic decisions made for or about them, such as automated credit approval decisions. Many proposals for privacy protection in the U.S. use GDPR as a template; the California Consumer Privacy Act (CCPA), for example, is often referred to as GDPR-Lite.

As the Aadhar effort has succeeded in creating a large biometric identification programme, we will soon enter a new phase when companies and governments try to build a layer of intelligence on top of the data to drive automated decisions. It is time to create some checks and balances. In my book A Human’s Guide to Machine Intelligence, I have proposed an algorithmic bill of rights to protect citizens when algorithms are used to make socially consequential decisions. The purpose of these rights is to offer consumer protection at a time when computer algorithms make so many decisions for or about us. The key pillars behind this bill of rights are transparency, control, audits, and education. Transparency is about clarity in terms of inputs (what does the algorithm know about us), performance (how well does the algorithm work), and outputs (what kinds of decisions does it make). Another important pillar is user control. Algorithm designers should grant users a means to have some degree of control over how an algorithm makes decisions for them. It can be as simple as Facebook giving its users the power to flag a news post as potentially false; it can be as dramatic and significant as letting a passenger intervene when he is not satisfied with the choices a driverless car appears to be making. I have also proposed that companies have an audit process in place that evaluates algorithms beyond their technical merits and also considers socially important factors such as the fairness of automated decisions. Lastly, we need more informed and engaged citizens and consumers of automated decision-making systems. Only by assuming this responsibility can citizens make full use of the other rights I just outlined.

Together, this algorithmic bill of rights will help ensure that we can harness the efficiency and consistency of automated decisions without worrying about them violating social norms and ethics.


Kartik Hosanagar is the John C. Hower Professor at The Wharton School of The University of Pennsylvania where he studies technology and the digital economy. He is the author of A Human’s Guide to Machine Intelligence.

 

Delve into the Universe of Algorithms with Kartik Hosanagar

In his new book, A Human’s Guide to Machine Intelligence, Kartik Hosanagar surveys the brave new world of algorithmic decision making and reveals the potentially dangerous biases algorithms can give rise to as they increasingly run our lives. He makes the compelling case that we need to arm ourselves with a better, deeper, more nuanced understanding of the phenomenon of algorithmic thinking. The way to achieve that is to understand that algorithms often think a lot like their creators, that is, like you and me.

Here is what the author has to say about his journey towards writing the book!

Tell us what your book is about.

If you read the news, you have probably heard the term algorithms: computer code that seems to control much of what we do on the internet, and which is landing us in all sorts of jams. Elections are swayed by newsfeed algorithms, markets are manipulated by trading algorithms, women and minorities are discriminated against by resume-screening algorithms; individuals are left at the mercy of machines. There is a lot of fear mongering and we hear terms such as “weapons of math destruction.” But a key question remains unanswered: what are we supposed to do about it? We can’t wish algorithms away – and, frankly, we wouldn’t want to. But they come with huge implications for our personal and professional lives that we need to understand if we’re going to attempt to offset the challenges they pose. This book offers us a way in.

Why did you write this book?

I spend my days helping students understand technology; designing and analyzing studies that probe algorithms’ impact on the world; and writing code myself. And while my subject gets a lot of attention in popular journalism, I feel the public lacks the right mental models to understand algorithms and AI, and as a result the conversation is too fear-oriented, at the expense of being solution-oriented. This is my attempt to address these problems and start a conversation on what the solution should look like.

The germ of the book itself began in my research lab. I was conducting a study I thought would confirm accepted notions that the Internet was democratizing taste and choice; in fact, it showed that commonly used algorithms did the opposite. That led me to work on how to design systems to achieve better social goals and business targets. We need to do something similar here, with this broader challenge – take a forward-looking view to solve the problems, not just worry or create fear about them.

Are algorithms too complex for most of us to understand?

They are not. Many of us are overwhelmed when we hear words like algorithms and AI. But they are concepts all of us can understand and, in fact, need to understand given their growing importance.

Today’s most sophisticated algorithms aren’t simple sets of instructions; they’re black boxes too technical for most of us to get our heads around. Even the regulators trained to monitor these things are years behind the AI that underlies modern algorithms. That’s what this book offers: a way in. In the course of trying to explain why code goes rogue, I came upon a novel insight that not only offers an understanding of algorithms but also points us towards a framework for controlling their power. I found that algorithmic behavior, like human behavior, can be influenced by both nature and nurture – in algorithms’ case, this means how they are coded by their programmers and the real-world data they soak up and learn from. In other words, algorithms go rogue for some of the same reasons humans do: they’re creative and unpredictable, they’re usually wonderful, sometimes dangerous.

This way of viewing algorithms helps us understand what causes algorithms to behave in biased and unpredictable ways, and in turn helps move us away from a fear-driven conversation towards practical solutions to these problems.

So, how concerned should we be that AI and algorithms have biases?

The biggest cause for concern is not that algorithms have biases; in fact, algorithms are on average less biased than humans. The issue is that we are more susceptible to biases in algorithms than in humans. First, despite our emerging skepticism, most people still see algorithms as rational, infallible machines, and thus fail to address and curb their (so-called) “bad behavior” quickly enough. So, elections are swayed, markets are manipulated, individuals are hurt due to our own attitudes and actions towards algorithms. Moreover, human biases and rogue behaviors don’t scale the way rogue software might. A bad judge or doctor can affect the lives of thousands of people; bad code can, and does, affect the lives of billions. So I finish the book by proposing concrete steps we can take towards a solution, including an algorithm bill of rights – a set of basic rights we should all demand and that regulators should provide us.

Is there anything I can do as an individual? Or am I at the mercy of large powerful tech companies?

As individuals, the power we have is knowledge, our dollars, and votes. I have four concrete steps individuals should follow.

  1. Be aware of when algorithms are making decisions a) for you and b) about you.
  2. Understand how this might affect the decisions being made. (Reading this book will help you with this!)
  3. Decide whether this is acceptable – to you as an individual, and as a member of society.
  4. If this is not acceptable, demand changes. Or walk away from algorithms that you think undermine the fabric of society.

What would this look like in a real-life example?

Suppose you are active on Facebook and discover news stories to read on the platform. You wonder if you are getting the full breadth of perspectives on an issue or even if the news and posts are false or manipulated in some way. You can do the following:

  1. First, remember that Facebook’s algorithm has essentially decided which of thousands of stories and posts to show you.
  2. Recall what you’ve learned in my pages: that Facebook’s algorithms choose the news stories from the ones posted by your friends and prioritize them based on which friends’ posts you engage with the most (a minimal illustration of this kind of engagement-based ranking follows this list). If you want a breadth of perspectives, then don’t unfriend or disengage from people with whom you disagree.
  3. If you find false information has been posted by someone, inform them. Also, with one click you can notify Facebook that the information is false. Facebook’s algorithms can now use your feedback to stop circulating false stories.
  4. Finally, demand transparency from Facebook on why certain posts are shown and others are not.
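For readers who want to see what “prioritize by engagement” means in practice, here is a minimal, hypothetical sketch in Python. The friends, posts, and engagement counts are invented, and real news-feed ranking uses many more signals; the point is only that ranking by past engagement keeps surfacing the same voices.

from typing import NamedTuple

class Post(NamedTuple):
    friend: str
    text: str

# Invented data: how often you have engaged with each friend's posts in the past.
engagement = {"asha": 42, "bo": 3, "carmen": 17}

candidates = [
    Post("bo", "Local election results thread"),
    Post("asha", "Vacation photo album"),
    Post("carmen", "Op-ed on data privacy"),
]

# Rank candidate posts by past engagement with their authors: the feed keeps
# showing the friends you already interact with, which is why disengaging from
# people you disagree with narrows the range of perspectives you see.
feed = sorted(candidates, key=lambda p: engagement.get(p.friend, 0), reverse=True)
for post in feed:
    print(f"{post.friend}: {post.text}")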

It works for other examples, too, where algorithms make decisions about us, such as whether we get a mortgage loan or which school our kids can go to. If you’re unhappy with how you’re being treated, ask whether an algorithm was used and what factors were considered. Vote for politicians who will support some basic algorithmic rights for all of us: being informed when algorithms make decisions about us, and some simple way of understanding those algorithmic decisions.

What should we expect from the government and our elected representatives?

In the book, I call for a bill of rights that would make it much easier for individuals to follow the process I describe above; our elected representatives need to support this, as well as create channels for complaints – ways for individuals to flag problematic algorithmic decisions, and ways for the government to act. The EU has incorporated some relevant provisions in its GDPR regulation including a right to explanation, where consumers can demand explanations for algorithmic decisions that affect them. GDPR may not be the right solution for all governments but we need to think hard about how we grant people some basic rights regarding how algorithms make decisions about them.

I also believe in an idea first put forward by Ben Shneiderman, a professor of computer science at the University of Maryland. We need a national algorithmic safety board that would operate much like the Federal Reserve, staffed by experts and charged with monitoring and controlling the use of algorithms by corporations and other large organizations, including the government itself. The board’s goal would be to provide consumer protection and to minimize and contain risks tied to algorithms.


A Human’s Guide to Machine Intelligence is an entertaining and provocative look at one of the most important developments of our time.

Know the New Age Man!

Atul Jalan’s book Where Will Man Take Us? offers insights into the effects technology is having on the world today. Exploring advances in nanotechnology, artificial intelligence, quantum computing and genetics, it looks ahead to the future while mapping the pertinent question of what these changes are doing to us, as a society and as a species. It also gives an intriguing perspective on how today’s technology is rapidly altering the dynamics of human love, morality and ethics, and wonders what is in store for humankind in the next generation.

Here we give you a snippet of the new-age man, as imagined by Atul Jalan in this book:

 

  1. With Artificial Intelligence already making our lives easier, the day is not far when AI will be sophisticated enough to run its own varied functions.

 

“We will, at some point soon, come to a stage where AI will become capable of recursive self-improvement”

 

  2. In the wake of swift technological development and the abundance of machines dominating our lives, there is a possibility of humans passing from their current form into a higher one, as noted by William Reade. Reade calls this theory the second act: our present time is understood to be only a transitional phase from a human to a post-human era, one that would be controlled by machines.

 

“Cosmologists believe that this future, this second act, could extend into billions of years. Machines might not need this planet and its atmosphere to survive and might be able to explore space extensively, as humans never could”

 

  3. The book lists a series of possibilities that could unfold once the era of ASI (Artificial Super Intelligence) comes into being. One of the most interesting outcomes would be a technology that distils our consciousness through neural engineering and passes it on to a computer, thereby reinventing the concept of life after death!

 

“We might also soon be able to clone our body and then live eternally by moving from clone to clone. Imagine your body is like a smartphone and your consciousness is on the cloud”

 

  4. Technology has come to exert a strong influence on people in the modern world, just as religion has for years. Atul Jalan explains that the indomitable search for knowledge and technological advancement shows just how important these advances will prove to be in the future.

 

“Much as socialism took over by promising salvation through social justice and electricity, so, in the coming decades, new techno-religions will take over—promising salvation through algorithms and genetics”

 

  5. Nanotechnology has proved to be another important development in recent years. Scientists are working on brain-computer interfaces that could be used to augment human abilities.

 

“The progress that is being made on brain-computer interfaces verges on science-fiction. This means that soon you will be able to operate the computer with thought, much the same way our thoughts control our speech, movements and feelings”

 

  6. One of the biggest breakthroughs in the field of nanotechnology has been the invention of nanobots. When released into our bloodstreams, these can unclog arteries and repair organ damage, and scientists are even speculating that they might be able to reverse the ageing process in the human body!

 

“But what will really make you sit up is the fact that eventually, they could soon even restore our DNA to how it was when we were in our twenties. This can turn fragile senior citizens into healthy young individuals overnight. In short, the promise of eternal youth”

 

 

In this book, Atul Jalan tackles nanotechnology, artificial intelligence, quantum computing and genetics, seamlessly weaving the future of technology with the changing dynamics of human love, morality and ethics.

Busted! 8 Myths about the Billion Internet Users that You Need to Know

A digital anthropologist examines the online lives of millions of people in China, India, Brazil, and across the Middle East—home to most of the world’s internet users—and discovers that what they are doing is not what we imagine.

In The Next Billion Users, Payal Arora reveals habits of use bound to intrigue everyone seeking to reach the next billion internet users.

Read on to find out the 8 myths that get busted in this book:


Myth 1: Leisure is the prerogative of the elite and the poor don’t use the internet for frivolous purposes

There is a belief that digital life for the poor would be rooted in work and inherently utilitarian, but that is not the case.

“When it becomes clear that leisure pursuits are what motivate people at the margins to embrace new media tools, will development agencies and grant organizations lose their own motivation to provide universal internet access…”

~

Myth 2: Old mass media has become redundant

“Because newspapers are unavailable in many villages in Namibia’s Ohagwena Region, mobile users circulate clips of newspaper articles on WhatsApp…Old technology seems to reinvent itself, offering new channels of expression and communication.”

~

Myth 3: Girls use mobile phones more than boys

“It was found that the girls used mobile phones far less often than the boys did. When asked why, the girls explained that their brothers monopolized the mobile phone. Also, as girls, and unlike their brothers, they had to do housework and had far less uninterrupted leisure time…”

~

Myth 4: Technology helps create a balance between labour and leisure. It liberates people from work

“…new technologies have had an adverse effect on leisure time, as people tend to be in a constant state of busyness with their mobile devices…White-collar workers can be trapped in a 24/7 world of labour if they are unable to switch off their digital devices.”

~

Myth 5: People don’t friend strangers due to privacy and safety issues

“Teens who have grown up in a slum surrounded by their family, relatives, and neighbours, in highly constrained settings, are attracted to befriending people from another city or, better yet, anyone who is foreign, not only because it widens their horizons but because it can enhance their social status among their friends.”

~

Myth 6: Text-only mobile versions are popular in households with low income and connectivity

“Clearly, young people, regardless of their income or the region they live in, place high value on visual images…They are confidence builders, and they work particularly well for the vast number of semiliterate youth, enabling them to comfortably participate in this online world by sharing posts and expressing themselves in spite of their limited literacy. This is a key reason Facebook Zero, the text-only mobile version of Facebook…struggled to gain traction in low-income communities.”

~

Myth 7: Piracy is a problem that can be solved if people who pirate are punished

“…piracy is not a problem, not a crime, but instead a problem of pricing: what has made piracy ubiquitous is, quite simply, the media industry’s refusal to lower prices and its continuous neglect of the billions of low-income consumers in countries of the Global South, who simply want to be able to experience the pleasures provided by entertainment media that are so easily accessible for wealthier people.”

~

 

Myth 8: Corporates hate piracy

“The only way to find out what gets the attention of media consumers in this saturated content era is to watch piracy sites, because these are the favoured sites of the majority of the world’s consumers and reflect the great diversity in consumers’ tastes. If certain television shows, for example, are…downloaded by users from Mali to Mumbai, then producers can more confidently invest in the global scaling of those media shows.”


The Next Billion Users is bound to intrigue everyone from casual internet users to developers of global digital platforms. AVAILABLE NOW!
