Is your high schooler sleep-deprived? Buckle up for bad news (LATimes Article)


New research finds that compared with high schoolers who typically get nine hours of sleep, those who get less shut-eye are more likely to drink and drive, text while driving, hop in a car driven by a driver who has consumed alcohol, and leave their seatbelts unbuckled.

But while dangerous behaviors escalated with less sleep, too much sleep also was linked to risk-taking in teens: Among those who routinely slept more than 10 hours per night, on average, researchers also noted higher rates of drinking and driving, infrequent seatbelt use, and riding with a driver who had consumed alcohol.

The National Sleep Foundation says that adolescents 14 to 17 years old should get eight to 10 hours of sleep per night, but a majority falls well short of that goal. Girls were more likely than boys to report insufficient sleep (71% versus 66.4%), and Asian students were the most likely of the ethnicities surveyed to report insufficient sleep (75.7%).

In a report released by the Centers for Disease Control and Prevention, researchers culled the survey responses of more than 50,000 teens in grades nine through 12 between 2007 and 2013. The teens were presented a range of risk-taking behaviors and asked whether they had engaged in any in the past 30 days. They were also asked about their average sleep duration and other health-related behaviors.
Among adolescents, two-thirds of all fatalities are related to traffic crashes. Sleepiness impairs a teen’s attention and reaction time behind the wheel, which is bad enough. But the authors of the new report suggest that chronic sleep shortage might also be linked to poor judgment or a “likelihood to disregard the negative consequences” of taking chances.

Compared with a teen getting the recommended nine hours of sleep nightly, a high schooler reporting six hours of sleep per night was 84% more likely to say he or she had driven after consuming alcohol in the past 30 days, 92% more likely to report infrequent seatbelt use in a car, and 42% more likely to acknowledge he or she had ridden in a car with a driver who consumed alcohol in the past month.
Teens who reported sleeping five hours or fewer per night were more than twice as likely as their well-rested peers to acknowledge drinking and driving and infrequent seatbelt use.

In the case of teens who sleep 10 hours or more per night, the researchers suggested that depression might be the best explanation for greater risk-taking.

Fewer than 30% of teens surveyed reported nightly sleep duration between eight and nine hours. Roughly 30% reported sleeping an average of seven hours nightly, with about 22% reporting six hours’ sleep nightly and 10.5% reporting five hours’. Only 1.8% of teens reported they slept 10 or more hours nightly.

Teens’ average propensities to engage in risky behavior were not reassuring: On average, 26% reported they had ridden in a car with a driver who had drunk alcohol at least once in the past 30 days; 30.3% reported they had texted while driving at least once in the past 30 days; 8.9% reported drinking and driving in the past 30 days, and 8.7% reported infrequent seatbelt use. Fully 86.1% reported they wore a bicycle helmet infrequently while riding a bike.


Copyright © 2016, Los Angeles Times

High-tech Coolhunting

Many of the most important ideas in technology come from the fringes. How do we spot them in the early stages?

An idea is born somewhere relatively obscure (maybe in a garage somewhere), spreads to small communities of hardcore enthusiasts (like Kickstarter and Reddit), and sometime later takes the mainstream by surprise when it suddenly explodes into popularity.

The journey is familiar in the context of startups, but it applies to important ideas and technologies more broadly.

Credit: Jobs (2013)

One example is bitcoin, which began as an interesting whitepaper published in 2008 and circulated among cryptography experts for a few years before coming to the attention of the mainstream startup community. Now it’s a cryptocurrency — and blockchain protocol — sensation whose activity is tracked closely and whose community splits are chronicled by newspapers around the world, including the New York Times. (The first mention of bitcoin there was actually four years ago, in the context of the TV show “The Good Wife.”)

There are countless reasons why you’d want to know about the next big idea in technology, and as early as possible. Whether you’re finding them, inventing them, or building businesses based on them, ideas matter, as does “the idea maze” one travels to get to them. But is there a way to catch these ideas as they emerge, in their very early stages? It’s difficult, because the places where these sleeper trends begin are seemingly random and obscure. This is tautological in a way: if something exciting comes out of an established tech center (like Stanford or MIT), the mainstream pays attention very quickly. It’s only the significant ideas that come from outsiders that take longer to surface and be understood.

Spotting these ideas has an element of serendipity and luck to it, but there are some things we can all do to improve our chances of finding an important trend before it hits the mainstream. The techniques aren’t different from what 1990s “coolhunters” or media and marketing trendspotters do to find pop culture trends early: It’s one part looking in the right places and cultivating the right sources, and one part noticing anomalies and acting quickly.

Where new ideas come from

The places where sleeper trends begin are by definition unpredictable, so it’s important to cast a broad net among interesting discussion groups or hobbyist communities — virtual or physical — that seem like good incubators for new ideas.

The next step is to keep track of what these groups are doing by setting up streams of information about them — anything from subscribing to newsletters and discovering good blogs in that space, to attending meetups and conferences.

One of the best ways to stay informed is by building a network of “social gateways”: people who are well connected in the communities you want to watch, but far enough outside your usual network that they expose you to new things. Then, when a particularly compelling idea surfaces, you will hear about it early.

Some communities are far more likely to produce winning ideas than others. In his classic work Diffusion of Innovations, sociologist Everett Rogers describes the characteristics of so-called “early adopters” — people who are more likely to find and use new technology.

These people are usually open-minded and scientific in their mindset, and have time or money to spend on trying new things. Any group with these characteristics is a good place for technologies to germinate, which is perhaps why college campuses make great testbeds for not only spotting but trying out new products.

According to Rogers, the best groups of early adopters are extroverted and have lots of social ties, because the more connected they are, the faster new ideas spread through the group. This is why trends often start with young people in cities, rather than in sprawling suburban neighborhoods, even though the latter group may be just as willing to try out the same new things.

Highly connected groups can be either offline (densely populated cities or other clusters) or online (tight-knit online communities). A really good sign of such a group is one with a newly formed, makeshift online presence, like a dedicated Slack channel or a fast-growing subreddit (r/nameofgrouportopic). That usually means the group is both new and highly connected.

The number of small groups where ideas could surface is too large to watch them all, so it may be preferable to look further along the path, where ideas collect — in communities that aren’t very big, but have outsized importance or influence. The places to watch aren’t so much “gatekeepers” or curators of culture, like influential editors, as they are tastemakers.

The ‘banana album’

A good analogy is what happened with The Velvet Underground’s “banana album” (so called because of the pop art banana cover by Andy Warhol); while the album only sold 30,000 copies in its early years, the people who bought it were the kind of people who started bands. It ended up “influencing the influencers” despite being relatively unknown.

The tech equivalent of people-who-start-bands are programmers and developers, which is why sites like StackOverflow and Hacker News — where those groups congregate — are good places to watch for trends, especially for tools and technologies that are popular with the best engineers. The opinion of programmers often determines whether a technology gets built at all, whether through startup recruiting or through open source development. It’s hard to imagine Linux being as successful as it has been without enthusiastic developers dedicating hours of their free time in the early years.

‘Live free or die’ Linux license plates

How to tell a trend from a fad

Once you’ve built a pipeline of promising groups and information sources, how do you decide which ones are worth paying attention to?

With breakout ideas in particular, the most important thing to notice is signs of rapid growth. If you have the data, anything over 5% sustained weekly growth is anomalous and worth paying attention to. The next best thing is to compare leading versus lagging indicators, because a big mismatch between the two is often a sign of rapid growth.
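To make the 5% heuristic concrete, here is a minimal sketch of how quickly steady weekly growth compounds; the starting community size of 1,000 users is hypothetical.

```python
# How a "modest" 5% weekly growth rate compounds over a year.
# The starting size of 1,000 users is hypothetical.

def compound(start: float, weekly_rate: float, weeks: int) -> float:
    """Value after `weeks` of steady compounding at `weekly_rate` per week."""
    return start * (1 + weekly_rate) ** weeks

users_now = 1_000
users_in_a_year = compound(users_now, 0.05, 52)
print(round(users_in_a_year))  # → 12643, roughly a 12.6x jump in one year
```

A community sustaining that rate goes from niche to mainstream in a couple of years, which is why the mismatch with lagging indicators like brand recognition can be so stark.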

Lagging indicators are things like brand recognition, prestige, and perceived importance. Leading indicators are more intrinsic to the idea or product itself, like how much their users care about it, how much better it is than the alternatives, and the volume of positive chatter about it. An example of lagging indicators outweighing leading indicators could be a film like Avatar, which had lots of marketing spend behind it, but appears to have had relatively limited cultural impact.

When I came across early but rapidly growing trends, like bitcoin in 2011, or the Oculus Kickstarter, they felt incongruous. In both communities, users and developers were going crazy for the new technology, which seemed to be a real breakthrough. Both trends seemed way too important to be something nobody outside the niche seemed to care about. In other words, the leading indicators far outstripped the lagging ones.

The natural instinct for most people is to ignore or dismiss this feeling, but you can train yourself to pay attention to it. In these cases it’s important to act quickly, because if something’s growing really fast, it’ll be common knowledge quite soon.

How to spot an important trend

In the early days of a new idea, it’s often the case that nobody is paying for anything. One way of guessing at economic impact is by looking at a “proxy for demand” — how much people will pay for similar alternatives — and, on the other side, a “proxy for supply” — how expensive something was to produce before.

But it’s still tricky, because it’s so hard to estimate the economic changes brought by a disruptive technology. It can be dangerous to dismiss or embrace trends solely for this reason.

Finally, while rapid exponential growth is a good sign, it’s not everything. Internet memes have huge growth too, and most ideas that quickly attract a large audience are actually just fads; it’s important to be able to pick out the important ones. One good sign is evidence of a “secret”: a real discovery, or some plausible reasoning why this idea couldn’t have manifested itself before now. With bitcoin, the secret is the technical breakthrough described in the bitcoin paper: where previous efforts at distributed trust and decentralized resourcing failed, it coupled bitcoin (incentives) with the blockchain protocol (distributed ledger) to solve those problems.

Of the fast-growing, real trends, only a few have the potential to be really, dramatically world-changing. Every year, there are only a few really important macro trends, and just a handful of them in computer science. It’s unlikely a tech trend will be significant unless it benefits from one of these larger shifts, though it can do so in an oblique way. For instance, ridesharing apps were an important trend, but they were enabled only recently by the larger trend of smartphones everywhere.

How trends go mainstream

Having a breakthrough or a community of early adopters clearly isn’t enough. So how does one tell a trend — something that continues to spread — from a fad — something that flares briefly and dies? The key is that it needs to spread beyond the group of early adopters to the rest of the world, and there needs to be a real pathway for that to happen.

Before the internet, this path was from metropolitan centers, through the suburbs to the rest of the population. For ideas that spread purely online, the path could be through a big aggregator site like Reddit. Another pattern is spread by institutional similarity: Facebook was able to easily spread from Harvard to clusters of students in other schools, because of the structural similarity of most universities to each other despite other differences they may have.

Another path is by latching on to a different fast-growing community. In the 1990s, one of the big marketing successes of Sprite was advertising to the hip-hop subculture before it became mainstream. This type of path is especially important in technology, because subcultures that go through massive exponential growth are common, and targeting fast-growing communities is a common strategy for startups that want to see their userbase grow. Mobile developers were a niche group in 2007 but are a large, mainstream developer community now — yet today’s community still retains many of the tastes and technology preferences of the old one. Coolhunters can take the same approach in reverse, by looking at fast-growing communities and seeing what they’re using.

Whatever the path from early adopters to the mainstream may be, some early adopters have qualities that help the idea spread. In the fashion world, social media marketers often target internet personalities who project an aspirational ideal, often by posting Instagram-style pictures of food, live events, etc.

In technology, the kind of person who others want to copy may fit a different profile — might be famous through open source, might be a prominent blogger — but must have the same kind of influence. The same principle can apply to groups; Python programmers could be more influential than Java programmers, for example.

A contrarian view

You can get quite far in spotting new ideas just by watching developers, people in major cities or schools, other early adopters and tastemakers… but that’s where everyone’s looking. Part of finding the right people and places to watch necessarily requires you to have an alternative but correct view of the world — to form hypotheses about what’s overrated and underrated.

Gaming has been a good example of an underrated community for the last few years — despite little prestige, it’s a surprisingly large and influential subculture, and gamers have been early adopters of ideas like livestreaming and VR.

An esports match

Why coolhunt?

Beyond any acquisitive value, finding tech trends also has broader applications. These ideas have huge, and often quite sudden, effects on the world, and it’s very difficult to tell in advance what they’ll be, or which industries they’ll affect the most. And while in an ideal world futurists could rationally deduce what the next big trend will be, the reality is that these systems are complex and fluid, changes compound and build on each other, and there are lots of unknown unknowns to account for.

This means coolhunting can be a surprisingly good way to catch these monumental shifts, compared to traditional market research or deductive reasoning from experts. Not bad for a technique invented by ’90s fashion marketers!

What motivates students?

What can you learn when you reach out to 66,000 students to find out what motivates them?

For one, students who have a sense of purpose are 18 times as likely to be motivated to do their schoolwork as those who do not. Students who find their schoolwork engaging are 16 times as likely to be academically motivated as those who do not.

Russ Quaglia and the QISA Institute’s 2014 study on student voice analyzed student responses about Self Worth, Engagement, Purpose, Teacher Support, and Peer Support, and cross-referenced the responses against Academic Motivation.

Below is a chart showing the increased likelihood of academic motivation for students who feel they have each specific attribute. For example, students who feel they have teacher support are 8 times as likely to be motivated to do their schoolwork as students who do not.

Academic Motivation

A second measurement determined the percent of students who did not have a particular attribute, shown in the table below:

Student Lacking

Thus, more than half of all students reported that they had little peer support for studying.

So, if a teacher wanted to increase the motivation of the most students, she would find interventions that encourage students to support each other in studying.

Or, if a teacher wanted to have the greatest effect on specific students, she might help those students who needed it (about 15% of the students) find a purpose.
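The two prioritization rules above (reach the most students versus help particular students the most) can be sketched as a toy calculation. Only the purpose figures (18x multiplier, ~15% lacking), the teacher-support multiplier (8x), the engagement multiplier (16x), and the peer-support share (>50% lacking) come from the study as reported here; the remaining numbers are hypothetical fill-ins.

```python
# Toy prioritization of interventions. Values marked "guess" are invented.
attributes = {
    # name: (motivation multiplier, share of students lacking the attribute)
    "purpose":         (18, 0.15),
    "engagement":      (16, 0.20),  # share lacking: guess
    "teacher support": (8,  0.25),  # share lacking: guess
    "peer support":    (4,  0.55),  # multiplier: guess
}

# Breadth: which intervention could touch the most students?
broadest = max(attributes, key=lambda name: attributes[name][1])

# Depth: which intervention moves each affected student the most?
deepest = max(attributes, key=lambda name: attributes[name][0])

print(broadest)  # → peer support
print(deepest)   # → purpose
```

The point of the sketch is just that "broadest" and "deepest" can name different interventions, which is exactly the trade-off the two scenarios describe.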

What are the interventions that have the largest effect?

While there may not be one right answer, it’s something that is worth discussing.

The mind of a student today

December 26, 2014 — Below is an interesting visual I came across through a tweet from We Are Teachers. The visual maps out some really intriguing facts about students today, based on different studies and surveys conducted mainly on US students. I went through this resource and devised this brief synopsis: Minority students attending US schools will make up a majority of all students…

Why Six Hours Of Sleep Is As Bad As None At All

Getting six hours of sleep a night simply isn’t enough for you to be your most productive. In fact, it’s just as bad as not sleeping at all.


Not getting enough sleep is detrimental to both your health and productivity. Yawn. We’ve heard it all before. But results from one study drive home just how bad a cumulative lack of sleep can be for performance. Subjects in a lab-based sleep study who were allowed to get only six hours of sleep a night for two weeks straight functioned as poorly as those who were forced to stay awake for two days straight. The kicker is that the people who slept six hours per night thought they were doing just fine.

This sleep deprivation study, published in the journal Sleep, took 48 adults and restricted their sleep to a maximum of four, six, or eight hours a night for two weeks; one unlucky subset was deprived of sleep for three days straight.

Subjects who got six hours of sleep a night for two weeks straight functioned as poorly as those who were forced to stay awake for two days straight.
During their time in the lab, the participants were tested every two hours (unless they were asleep, of course) on their cognitive performance as well as their reaction time. They also answered questions about their mood and any symptoms they were experiencing, basically, “How sleepy do you feel?”

As you can imagine, the subjects who were allowed to sleep eight hours per night had the highest performance on average. Subjects who got only four hours a night did worse each day. The group who got six hours of sleep seemed to be holding their own, until around day 10 of the study.

In the last few days of the experiment, the subjects who were restricted to a maximum of six hours of sleep per night showed cognitive performance that was as bad as the people who weren’t allowed to sleep at all. Getting only six hours of shut-eye was as bad as not sleeping for two days straight. The group who got only four hours of sleep each night performed just as poorly, but they hit their low sooner.

The six-hour sleep group didn’t rate their sleepiness as being all that bad, even as their cognitive performance was going downhill.
One of the most alarming results from the sleep study is that the six-hour sleep group didn’t rate their sleepiness as being all that bad, even as their cognitive performance was going downhill. The no-sleep group progressively rated their sleepiness level higher and higher; by the end of the experiment, their sleepiness rating had jumped by two levels, while the six-hour group’s jumped only one. Those findings raise questions about how people cope when they get insufficient sleep, perhaps suggesting that they’re in denial (willful or otherwise) about their present state.

Complicating matters is the fact that people are terrible at knowing how much time they actually spend asleep.
According to the Behavioral Risk Factor Surveillance System survey, as reported by the CDC, more than 35% of Americans sleep less than seven hours in a typical day. That’s one out of every three people. However, those who suffer from sleep problems don’t accurately estimate how much they sleep each night.

If you think you sleep seven hours a night, as one out of every three Americans does, it’s entirely possible you’re only getting six.
Research from the University of Chicago, for instance, shows that people are as likely to overestimate how much they sleep as to underestimate it. Another sleep study, published in Epidemiology, indicates people generally overestimate their nightly sleep by around 0.8 hours. The same study also estimates that for every hour beyond six that people sleep, they overestimate their sleep by about half an hour. If you think you sleep seven hours a night, as one out of every three Americans does, it’s entirely possible you’re only getting six.
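As a rough illustration, the two figures reported here (a ~0.8-hour average over-report, growing ~0.5 hours per hour of actual sleep beyond six) can be folded into a simple linear bias model. The linear form and its inversion are my simplification, not the study's actual model:

```python
# A simplified, illustrative linear model of sleep self-report bias.

def reported_sleep(actual_hours: float) -> float:
    """Predicted self-reported sleep given actual sleep (valid for actual >= 6)."""
    return actual_hours + 0.8 + 0.5 * (actual_hours - 6.0)

def actual_from_reported(reported_hours: float) -> float:
    """Invert the bias model to recover actual sleep from a self-report."""
    return (reported_hours + 2.2) / 1.5

print(round(actual_from_reported(7.0), 1))  # → 6.1: "think seven, get about six"
```

Under these assumptions, a reported seven hours corresponds to roughly six hours of actual sleep, which is the article's point.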

So no one knows how much or little they’re sleeping, and when they don’t sleep enough, they believe they’re doing better than they are.

Even just a little bit of sleep deprivation, in this case, six rather than eight hours of sleep across two weeks, accumulates to jaw-dropping results. Cumulative sleep deprivation isn’t a new concept by any means, but it’s rare to find research results that are so clear about the effects.

Figuring out how to get enough sleep, consistently, is a tough nut to crack. The same advice experts have batted around for decades is probably a good place to start: Have a consistent bedtime; don’t look at electronic screens at least 30 minutes before bed; limit alcohol intake (alcohol makes many people sleepy, but it can also decrease the quality and duration of sleep); and get enough exercise.

Other advice that you’ll hear less often, but which is equally valid, is to lose excess weight. Sleep apnea and obesity are highly correlated, according to the National Sleep Foundation. What’s more, obese workers already lose more productive time than normal-weight and overweight workers.

Other causes of sleep problems include physical, neurological, and psychological issues. Even stress and worry can negatively affect sleep. The CDC has called lack of sleep a health problem, and for good reason. Diet, exercise, mental health, and physical health all affect our ability to sleep, and in return, our ability to perform to our best.

Fixing bad sleep habits to get enough sleep is easier said than done. But if you’re functioning as if you hadn’t slept for two days straight, isn’t it worthwhile?

Jill Duffy is a writer covering technology and productivity. She is the author of Get Organized: How to Clean Up Your Messy Digital Life.

China’s education system leaves students woefully unprepared for the real world



Chinese kids are smart. The kids of Shanghai cleaners outperform those of British doctors and lawyers in math, and Shanghai’s richest students are about three academic years ahead of the developed-country average. Students in the 90th percentile in the US score below the average Shanghai student on a test given to 15-year-olds around the world (pdf).
But tests only tell you so much about Chinese students’ smarts, says Xiaodong Lin, a professor of cognitive studies at Columbia University’s Teachers College. When they come to university in the US, Chinese students tend to struggle with analytical writing, critical thinking, and communication with peers and professors, Lin wrote in the People’s Daily (link in Chinese), the official newspaper of China’s Communist Party.
“While Chinese education has focused more on mastery of knowledge, the American education seems to emphasize how to learn, even though we may not do as a good job as we wish,” she wrote.
Lin has taught college students in the US for 21 years, and told Quartz that she is constantly comparing her US and Chinese students. She has been the faculty adviser for the Chinese Student Association at Teachers College for 10 years, noting that she is “deeply in touch with the community.” She has surveyed other teachers about the differences between American and Chinese students, and writes about her findings frequently.

Global policymakers are equally obsessed with comparative student performance. The OECD gives 15-year-olds around the world a test every three years to track their progress in math, science, and reading; education ministers anxiously await the data and the praise or punishment that follows. But wherever they sit in the rankings, countries struggle to balance increased academic rigor against the reality that the modern workplace demands skills that aren’t reflected in tests.
Andreas Schleicher, head of the OECD’s education unit, told Quartz that the rest of the world can learn from the high standards that Chinese schools set for students, as well as the freedom teachers are given to adapt their methods to the subject matter. “They know how to teach,” he told Quartz. “It’s a science for them.” (Shanghai math teachers come to the UK every year to show off their talents.)
Lin agrees that this rigor is good—her Chinese students are better at deeper thinking than their American counterparts—but fostering independent thinking is also important. When she asked Chinese students why they were so quiet in class, the responses included statements like “my parents told me that I should not speak unless I have correct answers” and “I am afraid of speaking when my ideas are different from the class,” she wrote in the People’s Daily.
Other professors echoed Lin’s concerns. One from Northwestern University told her that Chinese students work very hard but rarely produce original thoughts or ideas. “What they lack more is the ability to bring up viewpoints and justify them,” Lin wrote.

Design as Participation

You’re Not Stuck In Traffic You Are Traffic

This started with a drivetime conversation about contemporary design with Joi Ito. We were stuck in traffic, and in our conversation a question emerged about designers: Why is this new generation of designers, who work with complex adaptive systems, so much more humble than their predecessors who designed, you know, stuff?
The answer is another question, a hypothesis: most designers who deliberately work with complex adaptive systems cannot help but be humbled by them. Maybe those who really design systems-interacting-with-systems approach their relationships to those systems with the daunting complexity of influence, rather than the hubris of definition or control.
The designers of complex adaptive systems are not strictly designing systems themselves. They are hinting those systems toward anticipated outcomes, from an array of existing interrelated systems. These are designers who do not understand themselves to be at the center of the system. Rather, they understand themselves to be participants, shaping the systems that interact with other forces, ideas, events and other designers. This essay is an exploration of what it means to participate.

‘Mies understood that the geometry of his building would be perfect until people got involved’

Photo by Thomas Hawk, “Mies van der Rohe”.
If in 2016 this seems intuitive, recall that it is at odds with the heroic sensibility – and role – of the modern designer. Or the Modernist designer, in any case, in whose shadow many designers continue to toil. On the pre-eminent Modernist architect Mies van der Rohe (director of the Bauhaus, among other legendary distinctions), Andrew Dolkart wrote: [1]
Mies understood that the geometry of his building would be perfect until people got involved. Once people moved in, they would be putting ornamental things along the window sills, they would be hanging all different kinds of curtains, and it would destroy the geometry. So there are no window sills; there is no place for you to put plants on the window. He supplied every single office with curtains, and all the curtains are exactly the same. And he supplied every window with venetian blinds, and the blinds open all the way, or they close all the way, or they stop halfway—those are the only places you can stop them, because he did not want venetian blinds everywhere or blinds set at angles.
The circumstances that led to such a position and practice – and the legacies that emerge from it – could be summarized in the question I have asked in every architecture review I’ve participated in: if tv shows have viewers, and cars have drivers, and books have readers, what word do architects use for the people who dwell in the buildings they make?

The Birth of the User

I haven’t met an architect with an answer to that yet and this isn’t really about architecture. Really. But in the meantime – in stark relief to the absence of the architectural term – the internet provided a model so useful that it sweeps across viewers, drivers, passengers, writers, readers, listeners, students, customers… bending all of these into the expressions of the user.
It’s hard to say exactly when the user was born, but it might be Don Norman at Apple in 1993 (referenced by Peter Merholz[2]):
“I invented the term [User Experience] because I thought Human Interface and usability were too narrow: I wanted to cover all aspects of the person’s experience with a system, including industrial design, graphics, the interface, the physical interaction, and the manual.”
In the 23 years since then, users have become the unit of measurement for entrepreneurial success. Like all units of measurement, it has acquired barnacle-like derivatives like MAU (monthly active users) and ARPU (average revenue per user). If something has more users, it’s more successful than something with fewer users. If a user spends more time with something, it’s better than something they spend less time with.
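For the curious, the two derivative metrics can be sketched in a few lines over a hypothetical event log; the record layout and numbers below are invented for illustration.

```python
from collections import defaultdict

# (user id, month, revenue) — hypothetical usage records
events = [
    ("alice", "2016-03", 0.00),
    ("bob",   "2016-03", 5.00),
    ("alice", "2016-04", 10.00),
    ("carol", "2016-04", 0.00),
]

monthly_users = defaultdict(set)
monthly_revenue = defaultdict(float)
for user, month, revenue in events:
    monthly_users[month].add(user)
    monthly_revenue[month] += revenue

for month in sorted(monthly_users):
    mau = len(monthly_users[month])      # monthly active users
    arpu = monthly_revenue[month] / mau  # average revenue per user
    print(month, mau, arpu)
```

The simplicity is the point: once activity is logged per user, "success" collapses into a couple of aggregate counts, which is exactly the reduction the essay goes on to question.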
To gain users – and to retain them – designers are drawing upon principles also set forth by Don Norman, in his 1988 book “The Psychology of Everyday Things.” In the book, Norman proposes “User Centered Design” (UCD), which is still in active and successful use decades later by some of the largest global design consultancies.
Broadly, UCD optimizes around engagement with the needs, desires and shortcomings of the user (in stark opposition to, say, Mies van der Rohe) and explores design from the analysis and insight into what the User might need or want to do. Simply, it moves the center from the designer’s imagination of the system to the designer’s imagination of the user of the system.
Joe and Josephine, Henry Dreyfuss Associates 1974 (MIT Press) — you’ve never met them, but if you’re seated, you’re basically sitting in their chair.
In 2016, it’s nearly impossible to imagine a pre-user Miesian worldview generating anything successful. Placing human activity at the center of the design process – as opposed to a set of behaviors that must be controlled or accommodated – has become an instinctive and mandatory process. Aspects of this pre-date Norman’s “user,” e.g., Henry Dreyfuss’ “Joe and Josephine” (above) for whom all his products were designed. But where Joe and Josephine had anatomy, users have behavior, intention, desire.
It’s not just the technical capacities of the internet: without UCD, Amazon couldn’t have put bookstores out of business, “ride-hailing” services couldn’t have broken the taxi industries in cities where they roll out, and digital music would never have broken the historical pricing and distribution practices of the record labels. Designers are appropriately proud of their roles in these disruptions; their insights into user desire and behavior are what made them possible.
But as designers construct these systems, what of the systems that interact with those systems? What about systems of local commerce and the civic engagement that is predicated upon it? Or the systems of unions that emerged after generations of labor struggles? Or the systems that provided compensation for some reasonable number of artists? When designers center around the user, where do the needs and desires of the other actors in the system go? The lens of the user obscures the view of the ecosystems it affects.
Robin Sloan recently addressed this in a post [3] about “Uber for food” startups like Sprig.
“[T]here’s more to any cafeteria than the serving line, and Sprig’s app offers no photograph of that other part. This is the Amazon move: absolute obfuscation of labor and logistics behind a friendly buy button. The experience for a Sprig customer is super convenient, almost magical; the experience for a chef or courier…? We don’t know. We don’t get to know. We’re just here to press the button.”
For users, this is what it means to be at the center: to be unaware of anything outside it. User-Centric Design means obscuring more than it surfaces. Sloan continues:
“I feel bad, truly, for Amazon and Sprig and their many peers—SpoonRocket, Postmates, Munchery, and the rest. They build these complicated systems and then they have to hide them, because the way they treat humans is at best mildly depressing and at worst burn-it-down dystopian.”
I have no idea what’s going on here but this is what I’m trying to say.
The user made perfect sense in the context in which it was originally defined: Human-Computer Interaction. UCD emphasized the practical and experiential aspects of the person at the keyboard, as opposed to the complex code and engineering behind it.
But we are no longer just using computers. We are using computers to use the world. The obscured and complex code and engineering now engage with people, resources, civics, communities, and ecosystems. Should designers continue to privilege users above all others in the system? What would it mean to design for participants instead? For all the participants?

Designing for Participation.

Designing for participation is different from designing for use. Within architecture – which I refer to again precisely because participation is not native to the discipline – the idea emerged with increasing frequency as surfaces and materials took on greater dynamism. But perhaps the quintessential historical example is Cedric Price, who was working long before that dynamism was practical.

Cedric Price’s ‘Fun Palace’ (1961) and if you’ve ever been to the Pompidou Center in Paris, you’re looking at what happens when this idea puts on a suit and gets a job.

photo by Anil Bawa-Cavia. “Fun Palace”.
Price is well known for two projects: Fun Palace (1961) and Generator (1976). Though neither was ever built, their genes can be isolated in the Centre Georges Pompidou and the so-called “smart home.” The Fun Palace (drawing, above), writes Stanley Mathews [4],
“…would challenge the very definition of architecture, for it was not even a conventional ‘building’ at all, but rather a kind of scaffold or framework, enclosing a socially interactive machine – a virtual architecture merging art and technology. In a sense, it was the realization of the long unfulfilled promise of Le Corbusier’s claims of a technologically informed architecture and the ‘machine for living’. It was not a museum, nor a school, theatre, or funfair, and yet it could be all of these things simultaneously or at different times. The Fun Palace was an environment continually interacting and responding to people.”
Designed in 1961, the Fun Palace was in free exchange with many contemporaneous ideas, cybernetics not least of all. The Fun Palace was, writes Mathews, “like a swarm or meteorological system, its behaviour would be unstable, indeterminate, and unknowable in advance.”
This was wholly in line with early cyberneticists like Gordon Pask (who noted in 1972, “now we’ve got the notion of a machine with an underspecified goal, the system that evolves…”)[5] But Price’s architecture was more than contemporary with cybernetics: it was infected by it. Pask himself organized the “Fun Palace Cybernetics Subcommittee.”
The Fun Palace was obviously quite radical as architecture, but far beyond its radical architectonic form (some of which was adopted by the Pompidou Center) was its more provocative proposal that the essential role for its designer was to create a context for participation.
This returns to the drivetime question about the designers of complex adaptive systems: Price was designing not for the uses he wished to see, but for all the uses he couldn’t imagine. This demands the ability to engage with the people in the building as participants, to see their desires and fears, and then to build contexts to address them. But it wasn’t strictly about interaction with the building; it was a fundamentally social engagement. As opposed to the “user” of a building who is interacting with a smart thermostat, the participants in a building are engaged with one another.
The social systems, however, are only one of many complex systems within which the Fun Palace is expressed. It stood outside any context of urban planning, or really any interaction with a broader system-based context (in which it is only a building, as opposed to a whole world). It was designed for participants, but it denied that the building was participating in complex adaptive systems far greater than itself.
as best I know, this is pretty much what Cedric Price wanted to see happening in the Fun Palace
The Living/David Benjamin. “Hy-fi”.
When the methodologies of design and science infect one another, however, design is not just a framework for participants, but something that is also, itself, participating. In the 2015 Hy-fi, a project for MoMA/PS1 by The Living (David Benjamin, above), it’s possible to see the various systems in active play. Analogous to Price’s Fun Palace, Hy-fi is a framework for participation, rather than a series of prescriptive uses.[6]
Hy-fi, however, is much more than the Price-like sensibilities that emphasize adaptability and context over structure and use. It is built from an innovative, 100% organic material, manufactured from discarded corn stalks and bespoke “living root-like structures from mushrooms.” David Benjamin’s design of this material is inextricable from his design of the building. Hy-fi sits at one intersection between building and growing, rendering it as close to zero-carbon-emission development as anything we’ll find in New York City.

growing a building, 2015.

Ecovative. “Timelapse of Myco Foam bricks growing for Hy-Fi”.
It’s not as simple as a kindness towards the planet, though indeed, it’s a love letter to earth. Here is a building that is composted, instead of demolished. Hy-fi rethinks what the building is and does, relative to its participation with the complex adaptive systems around it. From the MoMA summary:[7]
“The structure temporarily diverts the natural carbon cycle to produce a building that grows out of nothing but earth and returns to nothing but earth—with almost no waste, no energy needs, and no carbon emissions. This approach offers a new vision for society’s approach to physical objects and the built environment. It also offers a new definition of local materials, and a direct relationship to New York State agriculture and innovation culture, New York City artists and non-profits, and Queens community gardens.”

composting a building, 2015

In other words, it’s not as simple as making sure that people are participating with the building (as Pask and Price conspired to do over 50 years ago). Rather, the building is explicitly designed to participate in the built environment around it, as well as the natural environment beyond it, and further into local manufacturing, gardens, and agriculture.
This is the designer working to highlight the active engagement with those systems. This is the alternative to the unexamined traditions of User-Centric Design, which renders these systems as either opaque or invisible.

Design as Participation.

To see this all the way through, designers can be reconsidered – in part through the various lenses of science – to become participants themselves.
Special participants, perhaps, but see above: the subject of the MoMA text is “the natural carbon cycle” that is diverted by the designer. The designer is one of many influences and directives in the system with their own hopes and plans. But mushrooms also have plans. The people who dance inside them have plans. And of course the natural carbon cycle has plans as well.
This recalls Ian Bogost’s take on Object Oriented Ontology (OOO), which he characterized succinctly in 2009[8]:
Ontology is the philosophical study of existence. Object-oriented ontology (“OOO” for short) puts things at the center of this study. Its proponents contend that nothing has special status, but that everything exists equally—plumbers, DVD players, cotton, bonobos, sandstone, and Harry Potter, for example. In particular, OOO rejects the claims that human experience rests at the center of philosophy, and that things can be understood by how they appear to us. In place of science alone, OOO uses speculation to characterize how objects exist and interact.
Some contemporary work suggests that we are not only designing for participation, but that design is a fundamentally participatory act, engaging systems that extend further than the constraints of individual (or even human) activity and imagination.
This is design as an activity that doesn’t place the designer or the user in the center.
Hans Haacke, ‘To the Population’ (Der Bevölkerung). Inside the Reichstag. This is Germany.
Hans Haacke’s 2000 monument in the reunited German Reichstag – To the Population, Der Bevölkerung – asked every member of the German Parliament to collect soil from their various local regions and deposit the dirt, untouched, within the monument. What grows must be nurtured, collectively designated as the federal representation of Germany … on into the future, growing year by year. There are no brick-like constraints, as in Hy-fi. There is only a structural context for the complex – and wholly unpredictable – interaction of soil, seeds, water, and sunlight. Germany.
Maria Thereza Alves, ‘Seeds of Change’ ballast garden in Bristol. This is Bristol, which is to say: this is everywhere that Bristol went.
More recently, the Brazilian artist Maria Thereza Alves worked in Bristol, England, to identify “ballast seeds”: seeds that were inadvertent stowaways in the colonial period, when sailors would load rocks as ballast in their ships. The rocks came from wherever they happened to land, stabilizing the ships on their way to wherever they were going. In “Seeds of Change” (2015) she nurtured the reverse-colonizers of Bristol: marigolds from the Mediterranean, tassel flowers from the New World. These arrived quietly below the water line, silent migrants from centuries ago.
Alves happens to have started Brazil’s Green Party, which situates the work in a broader practice of participation. But in Bristol, she surfaces the complex systems that lie below deck, systems that are derivative effects of commerce, colonialism, and the dynamics of life at sea. It’s humbling to wander inside it, a reminder that it’s not always obvious who exactly colonizes whom.
The final work here is by the art and design collective Futurefarmers, started by Amy Franceschini in 1995. Famous to some for designing the logo for Twitter – itself an exercise in representing participatory engagement – much of their work centers on building infrastructure for participation. Some of the participation is between people, but much of it is with the complex natural systems that surround us. Their recent project “Flatbread Society: Land Grant 2014” is described by the Broad Art Museum[9] as:
“… a project that brings together farmers, oven builders, astronomers, artists, soil scientists, bakers, anthropologists, and others who share an interest in humankind’s long and complex relationship with grain.”
The work includes a flexible space for discussion and interaction (modeled after the trading floor of the Chicago grain exchanges) but more importantly, it also includes seeds that Futurefarmers have gathered from around the world, grains thought to be either extinct or useless. Further, there’s an oven. The grains are baked into flatbread together with anyone who cares to learn.
In the Flatbread Society work, like the work of Haacke and Alves, human activity can clearly be understood as only one of the systems in play. This is the inversion of User Centric Design. Rather than placing the human at the center of the work, the systems that surround us – systems we depend on – take the appropriate center stage in their complexity, their mystery, and their unpredictability.

You’re Not Stuck In Traffic. You Are Traffic.

Small detail from Chris Burden’s ‘Metropolis II’ at LACMA. Every artist’s landscape captures a place, and a precise moment in time. This is America, and this precise moment is the 20th century.
This started with a drivetime conversation about contemporary design with Joi Ito. We were stuck in traffic.
At the time, I remember thinking about David Foster Wallace, his essay and commencement address entitled “This is Water,” [10] and how he appealed to the students he was addressing:
“…I can spend time in the end-of-the-day traffic jam being angry and disgusted at all the huge, stupid, lane-blocking SUV’s and Hummers and V-12 pickup trucks burning their wasteful, selfish, forty-gallon tanks of gas, and I can dwell on the fact that the patriotic or religious bumper stickers always seem to be on the biggest, most disgustingly selfish vehicles driven by the ugliest, most inconsiderate and aggressive drivers, who are usually talking on cell phones as they cut people off in order to get just twenty stupid feet ahead in a traffic jam, and I can think about how our children’s children will despise us for wasting all the future’s fuel and probably screwing up the climate, and how spoiled and stupid and disgusting we all are, and how it all just sucks, and so on and so forth…
Look, if I choose to think this way, fine, lots of us do — except that thinking this way tends to be so easy and automatic it doesn’t have to be a choice. Thinking this way is my natural default-setting. It’s the automatic, unconscious way that I experience the boring, frustrating, crowded parts of adult life when I’m operating on the automatic, unconscious belief that I am the center of the world and that my immediate needs and feelings are what should determine the world’s priorities.”
There will always be designers to design the Hummers and the bumper stickers, and there will always be designers to design the web sites to propagate David Foster Wallace’s warnings and promises.
But a new generation of designers has emerged, concerned with designing strategies to subvert this “natural default-setting” in which each person understands themselves at the center of the world.
These designers do this by engaging with the complex adaptive systems that surround us, by revealing instead of obscuring, by building friction instead of hiding it, and by making clear that every one of us (designers included) is nothing more than a participant in systems that have no center to begin with. These are designers of systems that participate – with us and with one another – systems that invite participation instead of demanding interaction.
We can build software to eat the world, or software to feed it. And if we are going to feed it, it will require a different approach to design, one which optimizes for a different type of growth, and one that draws upon –and rewards – the humility of the designers who participate within it.
(If you didn’t come here through the MIT Media Lab’s Journal of Design and Science, you may find deeper context for this essay there.
Many conversations led to this, most notably with Daisy Ginsberg as well as Kenyatta Cheese, Tricia Wang, Joe Riley, Karthik Dinakar, Joi Ito, and other friends, colleagues, participants.)


[2] Peter Merholz. “Whither ‘User Experience’?” (1998).
[3] Robin Sloan. “Why I Quit Ordering From Uber-for-Food Start-Ups.” The Atlantic.
[4] Stanley Mathews. “The Fun Palace: Cedric Price’s Experiment in Architecture and Technology.” Technoetic Arts: A Journal of Speculative Research, Vol. 3, Num. 2. Intellect Ltd, 2005.
[5] Usman Haque. “The Architectural Relevance of Gordon Pask.” Architectural Design, Vol. 77, Num. 4 (2007): 54. Original quote in Mary Catherine Bateson, Our Own Metaphor: A Personal Account of a Conference on the Effects of Conscious Purpose on Human Adaptation. New York: Alfred A. Knopf.
[6] The Living/David Benjamin. “Hy-fi.”
[7] The Living/David Benjamin.
[8] Ian Bogost. “What Is Object-Oriented Ontology? A Definition for Ordinary Folk.” (2009).
[9] “The Land Grant: Flatbread Society.” Broad Art Museum, MSU.
[10] David Foster Wallace. “This Is Water.” (2005).

The Best of the Consumer Electronics Show 2016


Above: Panasonic’s transparent microLED display at CES 2016.

Image Credit: Dean Takahashi
I’ve returned from the biggest battleground of tech, the Consumer Electronics Show in Las Vegas.
My Intel Basis Peak smartwatch told me that, over four days at CES, I walked 73,376 steps, or 18,344 steps per day. Those steps felt heavier this year because I carried a shoulder bag instead of using a roller bag, per the new security rules at the event. On the plus side, I managed to come back without the nerd flu and without a blister like last year.
I did my best, but that means I still only covered a small percentage of the 3,000-plus companies spread across 2.4 million square feet of exhibit space at CES. My eyes began to glaze over as I saw the enormous numbers of drones, augmented reality glasses, virtual reality headsets, robots, smart cars, fitness wearables, 3D printers, and smart appliances that were part of the Internet of Things (making everyday objects smart and connected). I have published 63 stories about CES products and events. (I should say, I’ll continue to publish stories from CES over the next couple of weeks.) I think this was my 20th CES, though I have lost count.
Inside the bubble of CES, which was attended by an estimated 150,000 people, I didn’t even know the stock market was melting down. CES is the place to look if we want to find the things that are going to save us from economic gloom, although we may have to really look. The global technology industry is expected to generate $950 billion in 2016, down 2 percent from a year ago, with the decline due in no small part to weakness in China. This year, I didn’t see much that was going to save the world economy and overcome the skepticism of natural-born cynics. You could certainly find partisans who will say that virtual reality or the Internet of Things will do that, as both movements have spread well beyond just one or two companies. But it’s a reach to say that these categories have already given us their killer apps.
Still, I had a lot of fun finding things that I liked, and there was no shortage of these. Without further ado, here’s my favorite technology from CES 2016:
Panasonic Transparent Display
The idea of a transparent display isn’t that new. Big tech companies have been targeting them at retailers for a while. But this week Panasonic showed off a 55-inch television for the living room. The display is embedded in a bookcase, where it can transparently show a kind of trophy case behind the glass. But then it turns to black and shows home portraits. The image swivels to reveal a personalized screen with a weather report or a screen displaying a liquid-like aquarium. And it can even show a television show. The display uses micro light-emitting diodes. While the screen isn’t completely transparent, it can display at a resolution of 1080p. This was a glimpse of the future, much like Panasonic’s Magic Mirror from a year ago. And I thought it was a wonderful example of how to make technology blend into the environment of the home.
Eyefluence eye-tracking
Above: Jim Marggraff, CEO of Eyefluence, wears an Oculus Rift headset.
Image Credit: Dean Takahashi
Eyefluence was the shortest demo I did at CES, but it was enough to show me the future of using your eyes to control things. The tiny Eyefluence sensors are attached to the inside of an Oculus Rift virtual reality headset and detect the smallest movements in your eyes. I blinked, turned my head, and moved my eyes around, but Eyefluence could still track when and how I wanted to control something. I could navigate through a menu without using my hands, a keyboard, or a mouse. It was fast. It only takes about a minute to learn how to follow Eyefluence’s instructions, after which you can start controlling things that are before your eyeballs. This could very well supply a major ingredient missing from virtual reality headsets and augmented reality glasses.
Vayyar’s 3D sensing
Israeli startup Vayyar uses 3D imaging with radio waves to see through solid surfaces. It can be used to show a 3D model of a cancerous growth in a woman’s breast. It can be used to detect the heartbeat of a person, such as a sleeping baby, in another room. Or it can be used to find studs or pipes that are hidden in a wall. It can see through materials, objects, and liquids. Vayyar can also detect motion and track multiple people in large areas. It works by shooting a radio wave into a solid object and measuring all of the ways that the wave bounces around as it hits various objects. Vayyar collects the reflections and analyzes them, putting them back together as a 3D image in real time. While it is powerful, the amazing technology doesn’t use a lot of power. It comes from seasoned technologists Raviv Melamed, Miri Ratner, and Naftali Chayat, who were inspired by military technology. Melamed, formerly of Intel, told us that the technology is inexpensive. And yes, if you have the ability to see through things, you’re Superman.
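The core idea is easy to sketch in one dimension: a reflection's round-trip delay encodes the distance to whatever reflected it. This toy calculation illustrates time-of-flight ranging in general, not Vayyar's actual signal processing:

```python
SPEED_OF_LIGHT = 3.0e8  # meters per second

def echo_distance(round_trip_seconds):
    """Distance to a reflector, given the round-trip time of a radio pulse.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# An echo arriving 20 nanoseconds after transmission implies a
# reflecting surface about 3 meters away.
print(echo_distance(20e-9))
```

A real imaging system combines thousands of such measurements across an antenna array, solving for the arrangement of reflectors that best explains all the echoes at once, which is how a cloud of delays becomes a 3D picture.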
ODG’s ultra-wide-angle augmented reality glasses
Above: Dean Takahashi demos ODG’s augmented reality glasses.
Image Credit: Dean Takahashi
The Osterhout Design Group has taken its technology for night-vision goggles and turned it into augmented reality headsets for government and enterprises. The newest R-7 headset is like looking at a 65-inch TV screen that’s right in front of your eyeballs. The company demoed a future-generation technology with ultra-wide-angle viewing. The R-7 has a 30-degree field of view, but the future product has a 50-degree field of view with a 22:9 aspect ratio. It’s more like sitting in the best seat in an IMAX theater, said Nima Shams, vice president at ODG. I was able to look at it and see a wide Martian landscape. The glasses are packed with technology, from Wi-Fi and Bluetooth radios to gyroscopes and altitude sensors. The R-7 costs $2,750, but there’s no telling how much the wide-angle display will be. At some point in the future, I fully expect that this experience is going to be better than going to an IMAX theater.
Cypress’s energy-harvesting solar beacon
Above: This solar-based Bluetooth energy beacon doesn’t need a battery.
Image Credit: Cypress
Beacons are devices that can connect to your smartphone over a local Bluetooth network. Retailers like to use them to send special offers to your smartphone; that technique can target people walking by a specific store and get them to come inside. But beacons often run out of battery. By combining technology from Spansion (which Cypress Semiconductor has acquired) and Cypress, product designers can create a beacon with a solar energy array. Using that technology, the device can generate its own electricity and doesn’t need a battery. You can embed this kind of technology in any device that is part of the Internet of Things (smart and connected everyday objects). You could put a beacon in a cemetery and use it to send a story about the life of someone buried there. “We want the Internet of Things, but nobody wants to change 20 billion batteries,” said Eran Sandhaus, vice president at Cypress Semiconductor. Hundreds of potential advertisers are looking at it. We’ll definitely need new sources of power, whether kinetic or otherwise. This is how the Internet of Things is going to become practical, with billions of smart, connected objects that operate on the slimmest amount of power.
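The arithmetic behind battery-free beacons is worth sketching. Every number below is an illustrative assumption, not a Cypress specification, but it shows why a tiny solar cell can cover a beacon's average draw:

```python
# Rough, back-of-the-envelope power budget for a battery-free Bluetooth beacon.
# All figures are assumptions for illustration only.

adv_energy_uj = 50.0     # assumed energy per advertisement, in microjoules
adv_per_second = 1.0     # one advertisement broadcast per second

# Average power draw: energy per event times event rate.
avg_power_uw = adv_energy_uj * adv_per_second   # 50 microwatts

# Assumed harvestable power from indoor light, per square centimeter of cell.
harvest_uw_per_cm2 = 10.0

# Solar cell area needed to sustain the beacon with no battery at all.
cell_area_cm2 = avg_power_uw / harvest_uw_per_cm2
print(cell_area_cm2)  # 5.0 square centimeters under these assumptions
```

Under these (hypothetical) figures, a cell smaller than a postage-stamp pair keeps the beacon alive indefinitely, which is the whole pitch: no one has to change 20 billion batteries.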
Netatmo’s Presence smart outdoor security camera
Above: Netatmo has a smart security camera.
Image Credit: Netatmo
Presence is a smart outdoor security camera that sends an alert based on an analysis of a scene. If someone is loitering around your house, Netatmo’s Presence will detect that person and send a message to your smartphone. It can detect the movements of your pet, or it can tell you if someone is dropping a delivery at your door. You can train the camera to watch a particular zone and, using deep learning technology, analyze only certain types of motion. It also comes with a floodlight. Presence doesn’t dump a ton of video on you, and you don’t have to take out an online storage subscription. When it identifies significant events, it saves the video so that you can view it, sparing you from scrubbing through long, unedited footage. Presence will be available in the third quarter.
LG Rollable Display
Above: LG’s rollable display
Image Credit: LG
Rollable and flexible displays seem like either science fiction or a waste of time. But the LG rollable OLED screen is real. You can roll up the screen like a newspaper and, in fact, that might be a good use of the technology. LG is showing a prototype now that is as thin as paper and has a resolution of 810 x 1200, or almost 1 million pixels. I’m not sure how we’ll end up using it. But I suspect the rollable display will find many uses over time. This makes me feel like technology is becoming as disposable and flexible as a poster. You can go somewhere, put up a rollable screen, and then turn your surroundings into a movie theater or living room.
AtmosFlare 3D drawing
3D drawing is pretty cool. Adrian Amjadi of AtmosFlare showed me how to draw physical images in 3D, using the 3D drawing pen. The system uses ultraviolet light to cure a resin. You can pull on it and deform it any way you wish, essentially making something like the jellyfish in the video here. The resin sticks on porous things, but not on metal. The longer you leave the UV light on, the harder it becomes. The $30 system is on sale at Toys ‘R Us. The company says this will “forever change the way you do art.” I don’t know if it’s going to do that, but it did give me a small moment when I thought, “Wow, that’s cool.”
Medium painting and sculpting in Oculus Rift
Oculus VR came up with its “paint app” in September, but I finally got some hands-on time with it at CES. I was amazed at how easy it was to sculpt objects using two virtual hands (via the Oculus Touch hand controls and Oculus Rift headset). Expressing yourself with sculpting tools isn’t easy. But sculpting in the virtual space gave me a feeling of instant gratification. I started with a blank slate. Then I selected a tool for adding clay with one of my hands. I was able to change the way that the clay shot out of the Oculus Touch wand by rotating my hand. Then I was able to smooth out the edges, spray paint it, replicate it, and delete whole sections of it using my hands in the virtual world. It really makes you feel like you are sculpting something that is real. I can imagine it will be very easy to use a 3D printer to print out the 3D creations you build. You could certainly do something like this in a video game, like Media Molecule’s upcoming Dreams game on the PlayStation 4. But in VR, you feel like you are also inside the thing you are creating. You can turn the image to view it from new angles. This is one of those experiences that could make your head explode with creativity if you’re a 3D artist or sculptor.
Parrot Disco
Above: Parrot Disco
Image Credit: Parrot
Parrot has created a unique drone that can fly for 45 minutes on a single charge and reach speeds up to 50 miles per hour. The Parrot Disco is the French company’s latest entry into one of tech’s fastest-growing markets. The Disco is a flying wing that has a motor. It can fly itself or follow instructions you give it via an app. The drone can also take off and land by itself, using its own autopilot. If you use the Parrot Skycontroller, you can get a first-person view on a tablet screen of everything the drone is seeing. You don’t need any training to fly the drone, which has a range of two kilometers and can navigate its way back to you.

Bullish on Blended Learning Clusters

Michael Horn
An increasing number of regions are trying to create concentrated groups of blended-learning schools alongside education technology companies. These clusters may be key to advancing the blended-learning field and increasing its odds of personalizing learning at scale so that every child can be successful.

There is a theoretical underpinning for being bullish on the value these clusters could lend to the sector. These early attempts at building regional clusters mirror in many ways the clusters that Harvard professor Michael Porter has written about as having a powerful impact on the success of certain industries in certain geographies. Porter defines a cluster as a geographic concentration of interconnected companies and institutions in a particular field.

“Clusters promote both competition and cooperation,” Porter wrote in his classic Harvard Business Review article on the topic, “Clusters and the New Economics of Competition.” He goes on to note that vigorous competition is critical for a cluster to succeed, but that there must be lots of cooperation as well—“much of it vertical, involving companies in related industries and local institutions.”

The benefit of being geographically based, he writes, is that the proximity of the players and the repeated exchanges among them “fosters better coordination and trust.” The strength comes from the knowledge, relationships, and motivation that build up, which are local in nature. Indeed, new suppliers are likely to emerge within a cluster, he writes, because the “concentrated customer base” makes it easier for them to spot new market opportunities or challenges that players need help solving.

From wine and technology in California to the leather fashion industry in Italy and pharmaceuticals in New Jersey and Philadelphia, clusters have endured and been instrumental in advancing sectors even in a world where technology has reduced the importance of geography.

As Clayton Christensen has observed, clusters may be particularly important in nascent fields like blended learning, where the ecosystem is still immature: performance has yet to overshoot users’ demands, and how the different parts of the ecosystem fit together is not yet well understood. As a result, the ecosystem is highly interdependent, even as proprietary, vertically integrated firms do not (or, in the case of education, often cannot) stretch across the entire value network. In such circumstances, a cluster of organizations close enough to both compete and cooperate may be critical.


Perhaps the most promising blended-learning cluster is blossoming somewhat organically in Silicon Valley. There, the Silicon Schools Fund (where I’m a board member), the Rogers Family Foundation, and Startup Education are helping fund the creation of a critical mass of blended-learning schools, while traditional venture capitalists, funders like Reach Capital, Owl Ventures, GSV, and Learn Capital, and accelerators like ImagineK12 are helping seed an equally critical mass of education technology companies.

The NGLC Regional Funds for Breakthrough Schools, one of the supporters of the Rogers Family Foundation’s efforts in California, has funded similar regional efforts in New Orleans with New Schools for New Orleans; Washington, DC, with CityBridge Foundation; Colorado with the Colorado Education Initiative; Chicago with Leap Innovations; and New England with the New England Secondary School Consortium.

Student Question | Is Social Media Making Us More Narcissistic?

Are social media like Facebook turning us into narcissists? The Times online feature Room for Debate invites knowledgeable outside contributors to discuss questions like this one as well as news events and other timely issues.

Student Opinion – The Learning Network
Questions about issues in the news for students 13 and older.

Do you spend too much time trying to be attractive and interesting to others? Are you just a little too in love with your own Instagram feed?

An essay addressing those questions was chosen by two of our Student Council members this week. Angie Shen explains why she thinks it’s important:
As the generation who grew up with social media, a reflection on narcissism is of critical importance to teenagers. What are the psychological and ethical implications of constant engagement with or obsession over social media? How does it change our relationship with others and how we see ourselves?

“Narcissism Is Increasing. So You’re Not So Special.” begins:

My teenage son recently informed me that there is an Internet quiz to test oneself for narcissism. His friend had just taken it. “How did it turn out?” I asked. “He says he did great!” my son responded. “He got the maximum score!”

When I was a child, no one outside the mental health profession talked about narcissism; people were more concerned with inadequate self-esteem, which at the time was believed to lurk behind nearly every difficulty. Like so many excesses of the 1970s, the self-love cult spun out of control and is now rampaging through our culture like Godzilla through Tokyo.

A 2010 study in the journal Social Psychological and Personality Science found that the percentage of college students exhibiting narcissistic personality traits, based on their scores on the Narcissistic Personality Inventory, a widely used diagnostic test, has increased by more than half since the early 1980s, to 30 percent. In their book “The Narcissism Epidemic,” the psychology professors Jean M. Twenge and W. Keith Campbell show that narcissism has increased as quickly as obesity has since the 1980s. Even our egos are getting fat.

It has even infected our political debate. Donald Trump? “Remarkably narcissistic,” the developmental psychologist Howard Gardner told Vanity Fair magazine. I can’t say whether Mr. Trump is or isn’t a narcissist. But I do dispute the assertion that if he is, it is somehow remarkable.

This is a costly problem. While full-blown narcissists often report high levels of personal satisfaction, they create havoc and misery around them. There is overwhelming evidence linking narcissism with lower honesty and raised aggression. It’s notable for Valentine’s Day that narcissists struggle to stay committed to romantic partners, in no small part because they consider themselves superior.

The full-blown narcissist might reply, “So what?” But narcissism isn’t an either-or characteristic. It’s more of a set of progressive symptoms (like alcoholism) than an identifiable state (like diabetes). Millions of Americans exhibit symptoms, but still have a conscience and a hunger for moral improvement. At the very least, they really don’t want to be terrible people.

Students: Read the entire article, then tell us …

— Do you recognize yourself or your friends or family in any of the descriptions in this article? Are you sometimes too fixated on collecting “likes” and thinking about how others see you?

— What’s the line between “healthy self-love” that “requires being fully alive at this moment, as opposed to being virtually alive while wondering what others think,” and unhealthy narcissism? How can you stay on the healthy side of the line?

— Did you take the test? What did it tell you about yourself?

Henry Xu, another Student Council member who recommended this article, suggests these questions:

— What about Instagram, Facebook, Snapchat and other social media feeds makes them so hard to put down?

— Do you think this writer’s proposal of a “social media fast” is a viable way to combat narcissism?

— For those who aren’t as attached to social media, do challenges from an overinflated sense of self still arise? If so, from where?

— If everyone is becoming more narcissistic, does that make narcissism necessarily a bad thing?

Want to think more about these questions? The Room for Debate blog’s forum Facebook and Narcissism can help.