Untangling your organization’s decision making

It’s the best and worst of times for decision makers. Swelling stockpiles of data, advanced analytics, and intelligent algorithms are providing organizations with powerful new inputs and methods for making all manner of decisions. Corporate leaders also are much more aware today than they were 20 years ago of the cognitive biases—anchoring, loss aversion, confirmation bias, and many more—that undermine decision making without our knowing it. Some have already created formal processes—checklists, devil’s advocates, competing analytic teams, and the like—to shake up the debate and create healthier decision-making dynamics.

Now for the bad news. In many large global companies, growing organizational complexity, anchored in strong product, functional, and regional axes, has clouded accountabilities. That means leaders are less able to delegate decisions cleanly, and the number of decision makers has risen. The reduced cost of communications brought on by the digital age has compounded matters by bringing more people into the flow via email, Slack, and internal knowledge-sharing platforms, without clarifying decision-making authority. The result is too many meetings and email threads with too little high-quality dialogue as executives ricochet between boredom and disengagement, paralysis, and anxiety (Exhibit 1). All this is a recipe for poor decisions: 72 percent of senior-executive respondents to a McKinsey survey said they thought bad strategic decisions either were about as frequent as good ones or were the prevailing norm in their organization.

Growing organizational complexity and proliferating digital communications are a recipe for poor decisions.

The ultimate solution for many organizations looking to untangle their decision making is to become flatter and more agile, with decision authority and accountability going hand in hand. High-flying technology companies such as Google and Spotify are frequently the poster children for this approach, but it has also been adapted by more traditional ones such as ING (for more, see our recent McKinsey Quarterly interview “ING’s agile transformation”). As we’ve described elsewhere, agile organization models get decision making into the right hands, are faster in reacting to (or anticipating) shifts in the business environment, and often become magnets for top talent, who prefer working at companies with fewer layers of management and greater empowerment.

As we’ve worked with organizations seeking to become more agile, we’ve found that it’s possible to accelerate the improvement of decision making through the simple steps of categorizing the type of decision that’s being made and tailoring your approach accordingly. In our work, we’ve observed four types of decisions (Exhibit 2):

The ABCDs of categorizing decisions.
  • Big-bet decisions. These infrequent and high-risk decisions have the potential to shape the future of the company.
  • Cross-cutting decisions. In these frequent and high-risk decisions, a series of small, interconnected decisions are made by different groups as part of a collaborative, end-to-end decision process.
  • Delegated decisions. These frequent and low-risk decisions are effectively handled by an individual or working team, with limited input from others.
  • Ad hoc decisions. The organization’s infrequent, low-stakes decisions are deliberately ignored in this article, in order to sharpen our focus on the other three areas, where organizational ambiguity is most likely to undermine decision-making effectiveness.

These decision categories often get overlooked, in our experience, because organizational complexity, murky accountabilities, and information overload have conspired to create messy decision-making processes in many companies. In this article, we’ll describe how to vary your decision-making methods according to the circumstances. We’ll also offer some tools that individuals can use to pinpoint problems in the moment and to take corrective action that should improve both the decision in question and, over time, the organization’s decision-making norms.

Before we begin, we should emphasize that even though the examples we describe focus on enterprise-level decisions, the application of this framework will depend on the reader’s perspective and location in the organization. For example, what might be a delegated decision for the enterprise as a whole could be a big-bet decision for an individual business unit. Regardless, any fundamental change in decision-making culture needs to involve the senior leaders in the organization or business unit. The top team will decide what decisions are big bets, where to appoint process leaders for cross-cutting decisions, and to whom to delegate. Senior executives also serve the critical functions of role-modeling a culture of collaboration and of making sure junior leaders take ownership of the delegated decisions.

Big bets

Bet-the-company decisions—from major acquisitions to game-changing capital investments—are inherently the most risky. Efforts to mitigate the impact of cognitive biases on decision making have, rightly, often focused on big bets. And that’s not the only special attention big bets need. In our experience, steps such as these are invaluable for big bets:

  • Appoint an executive sponsor. Each initiative should have a sponsor, who will work with a project lead to frame the important decisions for senior leaders to weigh in on—starting with a clear, one-sentence problem statement.
  • Break things down, and connect them up. Large, complex decisions often have multiple parts; you should explicitly break them down into bite-size chunks, with decision meetings at each stage. Big bets also frequently have interdependencies with other decisions. To avoid unintended consequences, step back to connect the dots.
  • Deploy a standard decision-making approach. The most important way to get big-bet decisions right is to have the right kind of interaction and discussion, including quality debate, competing scenarios, and devil’s advocates. Critical requirements are to create a clear agenda that focuses on debating the solution (instead of endlessly elaborating the problem), to require robust prework, and to assemble the right people, with diverse perspectives.
  • Move faster without losing commitment. Fast-but-good decision making also requires bringing the available facts to the table and committing to the outcome of the decision. Executives have to get comfortable living with imperfect data and being clear about what “good enough” looks like. Then, once a decision is made, they have to be willing to commit to it and take a gamble, even if they were opposed during the debate. Make sure, at the conclusion of every meeting, that it is clear who will communicate the decision and who owns the actions to begin carrying it out.

One company that does much of this well is a semiconductor maker that believes so strongly in the importance of getting big bets right that it built an entire management system around decision making. The company never assigns accountability for a decision to more than one person, and it requires a standard set of facts at any meeting where a decision is to be made (such as a problem statement, recommendation, net present value, risks, and alternatives). If this information isn't provided, the discussion is not even entertained. The CEO leads by example, and to date the company has a very good track record of investment performance and industry-changing moves.

It’s also important to develop tracking and feedback mechanisms to judge the success of decisions and, as needed, to course correct for both the decision and the decision-making process. One technique a regional energy provider uses is to create a one-page self-evaluation tool that allows each member of the team to assess how effectively decisions are being made and how well the team is adhering to its norms. Members of key decision-making bodies complete such evaluations at regular intervals (after every fifth or tenth meeting). Decision makers also agree, before leaving a meeting where a decision has been made, how they will track project success, and they set a follow-up date to review progress against expectations.

Big-bet decisions often are easy to recognize, but not always (Exhibit 3). Sometimes a series of decisions that might appear small in isolation represent a big bet when taken as a whole. A global technology company we know missed several opportunities that it could have seized through big-bet investments, because it was making technology-development decisions independently across each of its product lines, which reduced its ability to recognize far-reaching shifts in the industry. The solution can be as simple as a mechanism for periodically categorizing important decisions that are being made across the organization, looking for patterns, and then deciding whether it’s worthwhile to convene a big-bet-style process with executive sponsorship. None of this is possible, though, if companies aren’t in the habit of isolating major bets and paying them special attention.

A belated heads-up means you are not recognizing big bets.

Cross-cutting decisions

Far more frequent than big-bet decisions are cross-cutting ones—think pricing, sales, and operations planning processes or new-product launches—that demand input from a wide range of constituents. Collaborative efforts such as these are not actually single-point decisions, but instead comprise a series of decisions made over time by different groups as part of an end-to-end process. The challenge is not the decisions themselves but rather the choreography needed to bring multiple parties together to provide the right input, at the right time, without breeding bureaucracy that slows down the process and can diminish the decision quality. This is why the common advice to focus on “who has the decision” (or, “the D”) isn’t the right starting point; you should worry more about where the key points of collaboration and coordination are.

It’s easy to err by having too little or too much choreography. For an example of the former, consider the global pension fund that found itself in a major cash crunch because of uncoordinated decision making and limited transparency across its various business units. A perfect storm erupted when different business units’ decisions simultaneously increased the demand for cash while reducing its supply. In contrast, a specialty-chemicals company experienced the pain of excess choreography when it opened membership on each of its six governance committees to all senior leaders without clarifying the actual decision makers. All participants felt they had a right (and the need) to express an opinion on everything, even where they had little knowledge or expertise. The purpose of the meetings morphed into information sharing and unstructured debate, which stymied productive action (Exhibit 4).

Too many cooks get involved in the absence of processes for cross-cutting decisions.

Whichever end of the spectrum a company is on with cross-cutting decisions, the solution is likely to be similar: defining roles and decision rights along each step of the process. That’s what the specialty-chemicals company did. Similarly, the pension fund identified its CFO as the key decision maker in a host of cash-focused decisions, and then it mapped out the decision rights and steps in each of the contributing processes. For most companies seeking enhanced coordination, priorities include:

  • Map out the decision-making process, and then pressure-test it. Identify decisions that involve a cross-cutting group of leaders, and work with the stakeholders of each to agree on what the main steps in the process entail. Lay out a simple, plain-English playbook for the process to define the calendar, cadence, handoffs, and decisions. Too often, companies find themselves building complex process diagrams that are rarely read or used beyond the team that created them. Keep it simple.
  • Run water through the pipes. Then work through a set of real-life scenarios to pressure-test the system in collaboration with the people who will be running the process. We call this process “running water through the pipes,” because the first several times you do it, you will find where the “leaks” are. Then you can improve the process, train people to work within (and, when necessary, around) it, and confront, when the stakes are relatively low, leadership tensions or stresses in organizational dynamics.
  • Establish governance and decision-making bodies. Limit the number of decision-making bodies, and clarify for each its mandate, standing membership, roles (decision makers or critical “informers”), decision-making protocols, key points of collaboration, and standing agenda. Emphasize to the members that committees are not meetings but decision-making bodies, and they can make decisions outside of their standard meeting times. Encourage them to be flexible about when and where they make decisions, and to focus always on accelerating action.
  • Create shared objectives, metrics, and collaboration targets. These will help the persons involved feel responsible not just for their individual contributions in the process, but also for the process’s overall effectiveness. Team members should be encouraged to regularly seek improvements in the underlying process that is giving rise to their decisions.

Getting effective at cross-cutting decision making can be a great way to tackle other organizational problems, such as siloed working (Exhibit 5). Take, for example, a global finance company with a matrix of operations across markets and regions that struggled with cross-business-unit decision making. Product launches often cannibalized the products of other market groups. When the revenue shifts associated with one such decision caught the attention of senior management, company leaders formalized a new council for senior executives to come together and make several types of cross-cutting decisions, which yielded significant benefits.

When you are locked in silos, you are unlikely to collaborate effectively on cross-cutting decisions.

Delegated decisions

Delegated decisions are far narrower in scope than big-bet decisions or cross-cutting ones. They are frequent and relatively routine elements of day-to-day management, typically in areas such as hiring, marketing, and purchasing. The value at stake for delegated decisions is in the multiplier effect they can have because of the frequency of their occurrence across the organization. Placing the responsibility for these decisions in the hands of those closest to the work typically delivers faster, better, and more efficiently executed decisions, while also enhancing engagement and accountability at all levels of the organization.

In today’s world, there is the added complexity that many decisions (or parts of them) can be “delegated” to smart algorithms enabled by artificial intelligence. Identifying the parts of your decisions that can be entrusted to intelligent machines will speed up decisions and create greater consistency and transparency, but it requires setting clear thresholds for when those systems should escalate to a person, as well as being clear with people about how to leverage the tools effectively.

It’s essential to establish clarity around roles and responsibilities in order to craft a smooth-running system of delegated decision making (Exhibit 6). A renewable-energy company we know took this task seriously when undergoing a major reorganization that streamlined its senior management and drove decisions further down in the organization. The company developed a 30-minute “role card” conversation for each manager to have with his or her direct reports. As part of this conversation, managers explicitly laid out the decision rights and accountability metrics for each direct report. This approach allowed the company’s leaders to decentralize their decision making while also ensuring that accountability and transparency were in place. Such role clarity enables easier navigation, speeds up decision making, and makes it more customer focused. Companies may find it useful to take some of the following steps to reorganize decision-making power and establish transparency in their organization:

Drawn-out and complicated processes often mean more delegating is needed.
  • Delegate more decisions. To start delegating decisions today, make a list of the top 20 regularly occurring decisions. Take the first decision and ask three questions: (1) Is this a reversible decision? (2) Does one of my direct reports have the capability to make this decision? (3) Can I hold that person accountable for making the decision? If the answer to all three is yes, delegate the decision. Continue down your list until you are making only those decisions for which there is one shot to get it right and you alone possess the capability or the accountability. Role modeling by senior leaders is invaluable here, but they may be reluctant to loosen their grip. Reassure them (and yourself) by creating transparency through good performance dashboards, scorecards, and key performance indicators (KPIs), and by linking metrics back to individual performance reviews.
  • Avoid overlap of decision rights. Doubling up decision responsibility across management levels or dimensions of the reporting matrix only leads to confusion and stalemates. Employees perform better when they have explicit authority and receive the necessary training to tackle problems on their own. Although it may feel awkward, leaders should be explicit with their teams about when decisions are being fully delegated and when the leaders want input but need to maintain final decision rights.
  • Establish a clear escalation path. Set thresholds for decisions that require approval (for example, spending above a certain amount), and lay out a specific protocol for the rare occasion when a decision must be kicked up the ladder. This helps mitigate risk and keeps things moving briskly.
  • Don’t let people abdicate. One of the key challenges in delegating decisions is actually getting people to take ownership of the decisions. People will often succumb to escalating decisions to avoid personal risk; leaders need to play a strong role in encouraging personal ownership, even (and especially) when a bad call is made.

This last point deserves elaboration: although greater efficiency comes with delegated decision making, companies can never completely eliminate mistakes, and it’s inevitable that a decision here or there will end badly. What executives must avoid in this situation is succumbing to the temptation to yank back control (Exhibit 7). One CEO at a Fortune 100 company learned this lesson the hard way. For many years, her company had worked under a decentralized decision-making framework where business-unit leaders could sign off on many large and small deals, including M&A. Financial underperformance and the looming risk of going out of business during a severe market downturn led the CEO to pull back control and centralize virtually all decision making. The result was better cost control at the expense of swift decision making. After several big M&A deals came and went because the organization was too slow to act, the CEO decided she had to decentralize decisions again. This time, she reinforced the decentralized system with greater leadership accountability and transparency.

Top-heavy processes often mean more delegating is needed.

Instead of pulling back decision power after a slipup, hold people accountable for the decision, and coach them to avoid repeating the misstep. Similarly, in all but the rarest of cases, leaders should resist weighing in on a decision kicked up to them during a logjam. From the start, senior leaders should collectively agree on escalation protocols and stick with them to create consistency throughout the organization. This means, when necessary, that leaders must vigilantly reinforce the structure by sending decisions back with clear guidance on where the leader expects the decision to be made and by whom. If signs of congestion or dysfunction appear, leaders should reexamine the decision-making structure to make sure alignment, processes, and accountability are optimally arranged.


None of this is rocket science. Indeed, the first decision-making step Peter Drucker advanced in “The effective decision,” a 1967 Harvard Business Review article, was “classifying the problem.” Yet we’re struck, again and again, by how few large organizations have simple systems in place to make sure decisions are categorized so that they can be made by the right people in the right way at the right time. Interestingly, Drucker’s classification system focused on how generic or exceptional the problem was, as opposed to questions about the decision’s magnitude, potential for delegation, or cross-cutting nature. That’s not because Drucker was blind to these issues; in other writing, he strongly advocated decentralizing and delegating decision making to the degree possible. We’d argue, though, that today’s organizational complexity and rapid-fire digital communications have created considerably more ambiguity about decision-making authority than was prevalent 50 years ago. Organizations haven’t kept up. That’s why the path to better decision making need not be long and complicated. It’s simply a matter of untangling the crossed web of accountability, one decision at a time.

By Aaron De Smet, Gerald Lackey, and Leigh M. Weiss

How Edtech Tools Evolve

Introduction: We’ve Heard This Before

Great inventors have proclaimed technology’s potential to transform education before. In 1913, Thomas Edison asserted that “books will soon be obsolete in the public schools,” replaced by motion pictures. Nearly a century later, Steve Jobs, according to his biographer Walter Isaacson, believed print textbooks were “ripe for digital destruction.”

Not so fast. Over the decades, a parade of technologies—television, “teaching machines,” interactive whiteboards and desktop computers—seemed to have a far more muted impact on learning than futurists and entrepreneurs predicted. Even the trusty wood-pulp book still soldiers on: Roughly half of district IT leaders surveyed by the Consortium for School Networking believe that print materials will still be used regularly by 2018.

“The pattern of hype leading to disappointment, leading to another cycle of overpromising with the next technology, has a long history to it,” notes Larry Cuban, an education professor at Stanford University who began his career as a high school history teacher in the 1950s.

And yet, puncturing this bleak scenario are shining examples of times when technology has made a difference. In North Carolina, educators at Mooresville Graded School District (hailed by The New York Times in 2012 as the “de facto model of the digital school”) attribute a boost in test scores, attendance and graduation rates to the smart use of laptops and online software. In rural Central California, Lindsay Unified School District’s ongoing efforts to refine its competency-based learning model have led to small bumps in test scores but a dramatic drop in truancy, suspension and gang membership rates.

So what’s the difference? When can technology have a galvanizing effect, rather than amplify existing educational practices?

Kentaro Toyama, a professor at the University of Michigan’s School of Information, has often observed the latter: technology tends to amplify whatever practices are already in place. So how can new practices extend beyond a single class or a heroic teacher to an entire community, on a sustained basis? What portion of the answer lies with the technology itself, and what portion with how it is used?

The pattern of hype leading to disappointment, leading to another cycle of overpromising with the next technology, has a long history to it.

—Larry Cuban, emeritus professor at Stanford University

This chapter of our year-long survey of the role of technology in education dives into technology’s contribution to that fragile equation. And arguably one of the most thoughtful perspectives on technology’s role in education comes from Ruben Puentedura, a former teacher and university media center director. His investigation into the role of technology in education, beginning in the late 1980s, led to an observation at once clear-eyed and profound: not every device or app can or should transform how teachers teach.

To wield technology well, Puentedura asserts, teachers must ask and answer: “What opportunities does new technology bring to the table that weren’t available before?” Puentedura codified his observations in a framework nicknamed “SAMR,” which offers an invaluable window into understanding the different ways that technology can support changes in instructional practices and learning outcomes.

Yet there is a non-negotiable requirement for technology to make a difference. It has to work without requiring herculean workarounds.

Sometimes the linchpin requirements are technical. Electric cars were infeasible without lithium batteries and lightweight composites. Sometimes the requirements are structural. Digital readers and e-books first came to market in 1998, but it took nearly a decade to resolve problems around limited memory and storage, title selection, copyright, conflicting file formats and other issues before e-books captured significant consumer market share.

Rocket eBook, launched in 1998. Credit: Mark Richards, Computer History Museum

For educators to be able to count on technology, it has to work with the reliability of a light switch. And for decades, it has not. Just eight percent of all computers in U.S. public schools had internet access in 1995. A decade later, that figure had jumped to 97 percent—yet only 15 percent of all public schools enjoyed a wireless connection. Software incompatibility and technical problems, such as creating and managing accounts, proved problematic for educators. Nearly half of the educators surveyed in 2008 by the National Education Association reported feeling adequately prepared to integrate technology into instruction. Fewer than one-third used computers to plan lessons or teach.

In economics, things take longer to happen than you think they will.

—Rudiger Dornbusch, MIT economics professor

Today, more than 77 percent of U.S. school districts offer bandwidth speeds of 100 kbps per student for accessing online resources. This, coupled with cloud-computing services that allow apps, services and data to be accessed and shared on the web, has made technology much more feasible to use. The marketplace for online educational tools has also grown: Apple’s store now boasts more than 80,000 such apps. Interoperability standards are beginning to ease how data from different school systems and instructional tools are stored and shared. From 2013 to 2015, U.S. K-12 schools purchased more than 23 million devices, according to Futuresource Consulting.

“In economics, things take longer to happen than you think they will,” Rudiger Dornbusch, the late MIT economics professor, once said, “and then they happen faster than you thought they could.”

Today’s education technology has matured after decades of fits and starts. Improved bandwidth, cloud computing power and distribution channels such as app stores, among other infrastructural improvements, have helped developers make technologies more accessible, affordable and, most importantly, reliable for students and teachers to use.

Yet the question remains: What will technology do once it is in the hands of teachers and students? To better understand the interplay of new technologies and instructional practices, we’ll explore how edtech tools in three popular categories—math, English Language Arts and assessment—have evolved over time, how they reflect the pedagogical trends and then what this means in the context of Puentedura’s framework.




Product Profiles: What Today’s Tools Offer

How have today’s technologies evolved to help children develop math and reading abilities—the two core competencies that typically reflect how well they’re learning in school? And how do new tools allow them to demonstrate what they know, aside from traditional paper-and-pencil tests?

Math

In Search of the Middle Ground

“Who gets to learn mathematics, and the nature of the mathematics that is learned, are matters of consequence.”

—Alan Schoenfeld, UC Berkeley math professor

Is it more important for kids to memorize math formulas and compute, or to understand concepts and create their own approaches to solving problems? Whether students use pencils or iPads, the question has long stirred impassioned discussion among parents, teachers, mathematicians and policymakers. In 2004, University of California, Berkeley math professor Alan Schoenfeld described this debate as the “Math Wars” that have persisted throughout the 20th century.

Disagreements persist today between “traditionalists,” who believe math instruction should focus on calculations and procedures, and “reformers,” who want students to develop the logical and conceptual understanding behind the math. The “New Math” movement of the 1950s, championed by professional mathematicians, attempted to introduce conceptual thinking, such as the ability to calculate in bases other than 10. The effort floundered, derided by parents, teachers and mathematicians as overly abstract and conceptual.

A 2007 report from the National Mathematics Advisory Panel, assembled by the U.S. Department of Education, summed up these battles as a struggle over:

“How explicitly children must be taught skills based on formulas or algorithms (fixed, step-by-step procedures for solving math problems) versus a more inquiry-based approach in which students are exposed to real-world problems that help them develop fluency in number sense, reasoning, and problem-solving skills. In this latter approach, computational skills and correct answers are not the primary goals of instruction.”

This polarization is “nonsensical,” Schoenfeld noted. The two approaches are not mutually exclusive. Why can’t math instruction embrace both procedural and conceptual knowledge?

The Common Core math standards, released in June 2010, are the latest attempt to find a middle ground. Originally adopted by 46 states, the standards aim to pursue “conceptual understanding, procedural skills and fluency, and application with equal intensity.” Yet some students, parents and teachers have heckled the standards for befuddling homework problems and tests; it seemed not even curriculum developers knew how to translate Common Core math principles into instructional materials. Concerns about “fuzzy math” resurfaced, amplified through social media channels and YouTube.

Yet one fundamental difference between the math wars today and those of a half century ago is that today’s technology—in the form of Google or software such as Wolfram Alpha—can solve nearly any math problem with clicks and swipes. This ability will influence what teachers teach and how those subjects are taught.

“Math has been liberated from calculating,” proclaims Conrad Wolfram, strategic director of Wolfram Research. Computers, he states, can allow students to “experience harder problems [and be] able to play with the math, interact with it, feel it. We want people who can feel the math instinctively.”

How Math Tools Evolved

From Drilling to Adapting

The earliest instructional math software didn’t offer much in the way of instruction. In 1965, Stanford University professor Patrick Suppes led one of the first studies on how a text-based computer program could help fourth-grade students achieve basic arithmetic fluency. The program displayed a problem and asked students to input an answer. Correct responses would lead to the next problem, while incorrect ones would prompt a “wrong” message and give students another chance to get the correct answer. If this second attempt was still incorrect, the program would show the correct answer, then repeat the problem to help reinforce the facts.
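The flow described above can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration, not Suppes’ actual program; the function and variable names are invented.

```python
def drill(problems, ask):
    """Run a drill session over (question, answer) pairs.

    `ask` is any function that presents a question and returns the
    student's response. Each problem allows two attempts; if both miss,
    the answer is revealed and the problem is repeated once later.
    """
    queue = [(q, a, False) for q, a in problems]  # False = not yet repeated
    revealed = []
    while queue:
        question, answer, repeated = queue.pop(0)
        if any(ask(question) == answer for _ in range(2)):
            continue  # correct within two attempts: move on
        revealed.append((question, answer))  # both attempts missed: show answer
        if not repeated:
            queue.append((question, answer, True))  # repeat once to reinforce
    return revealed


# Example session with scripted responses standing in for a student.
responses = iter(["4", "9", "8", "7"])
missed = drill([("2+2", "4"), ("3+4", "7")], lambda q: next(responses))
```

In this scripted run the student gets “2+2” right immediately, misses “3+4” twice (so the answer is shown and the problem is re-queued), then answers it correctly on the repeat.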


Credit: Number Munchers (left) and Math Blaster (right)

Decades later, much instructional math software retained the same “drill-and-kill” approach. The trend was best reflected in the popularity of games such as Number Munchers and Math Blaster in the late ’80s and throughout the ’90s, which layered gaming elements such as points and rewards onto their drill exercises.

Even so, during the 1960s, when enthusiasm for artificial intelligence was on the rise, university researchers began work on “intelligent tutoring systems” aimed at identifying a student’s knowledge gaps and surfacing relevant hints and practice problems. There were limitations, to be sure; researchers lacked the fine-grained data their algorithms needed to make useful inferences. Yet after decades of research, Carnegie Mellon University researchers released one of the first commercially available K-12 educational software programs of this kind, Cognitive Tutor. It was followed a year later by ALEKS, based on the work of researchers at the University of California, Irvine. The two products use different cognitive architecture models to attempt to deduce what a student knows and doesn’t. (To learn more about what happens inside these engines, check out our EdSurge report on adaptive learning edtech products.)

More recently, other “adaptive” math tools have used frequent assessments to pair appropriate content with learners. When a student answers a question incorrectly, such programs attempt to identify knowledge gaps and surface relevant instructional materials. Some tools, like KnowRe, provide instructions on how to solve a problem. Other tools reinforce procedural concepts in videos ranging from step-by-step explanations (Khan Academy), to animations (BrainPOP), to real-world scenarios (Mathalicious).

Despite the ability of technology to deduce what students need and provide instruction, developers also recognize that educators must still retain their instructional role. DreamBox, which sells adaptive math software, recently added features to allow teachers more control over content assignment. “While we are still really focused on building student agency, we also want to ensure that we build teacher agency,” says DreamBox Chief Executive Officer and President Jessie Woolley-Wilson.

‘Seeing’ Math Beyond Symbols


Math is often represented by symbols (+ − × ÷), but technology today allows developers to eschew traditional notations so students can explore math in more visual and creative ways. There is supporting evidence: Researchers have observed Brazilian child street vendors performing complex arithmetic calculations through transactions (“street mathematics”) but struggling when presented with the same problems on a formal written test.

“We can make every mathematical idea as visual as it is numeric,” says Stanford education professor and YouCubed co-founder Jo Boaler. Boaler has studied neurobiological research on how solving math problems stimulates areas of the brain associated with visual processing.

“Everyone uses visual pathways when we work on mathematics and we all need to develop the visual areas of our brains,” she wrote in a recent report.

In the 1980s, tools including Geometer’s Sketchpad offered learners ways to explore math visually through interactive graphs. Today’s tools also let teachers create their own activities and students share their work. Desmos, a browser-based HTML5 graphing calculator, invites students to explore and share art made with math equations. “There’s enormous value in allowing students to create, estimate, visualize and generalize,” says Dan Meyer, chief academic officer at Desmos, “but a lot of math software today just allows them to calculate.”

Educational game developers have also found ways to introduce mathematical concepts without using symbols. ST Math (the two letters stand for spatial-temporal) uses puzzles to introduce Pre-K-12 math concepts without explicit language instruction or symbolic notations. Another popular game, DragonBox, lets students practice algebra without any notations. BrainQuake aims to teach number sense through puzzles involving spinning wheels.

Although games can make math more engaging, students may need support from teachers to apply skills learned from the game to schoolwork and tests. “One of the ways video games can be extremely powerful,” says Keith Devlin, a Stanford professor, co-founder and chief scientist of BrainQuake and NPR’s “Math Guy,” “is that when a kid has beaten a game, he or she may have greater confidence to master symbolic math. I think a two-step approach—video game and teacher—can be key in helping students who hate math get up to speed.”


Source: EdSurge

ELA

Teaching Reading in America

“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”

Dr. Seuss

Like math, literacy has had its own “Reading Wars” (or “Great Debate”) throughout the 20th century. Proponents of a phonics-based approach believed students should learn to decode words by sounding out letters. But in English, not all words sound the way they are spelled, and different words may sound alike. Other researchers and educators instead advocated a “whole language” approach that incorporates reading and writing, along with speaking and listening.

The back-and-forth debate eventually reached policymakers, who were alarmed by the 1983 report, “A Nation at Risk,” that charged that American students were woefully underprepared compared to their international peers. In California, poor results on the 1992 and 1994 National Assessment of Educational Progress reading test—more than half of fourth-grade students were reading below grade level—fueled critiques of the state’s whole-language approach.

In 1997, the National Institute of Child Health and Human Development convened a national panel of literacy researchers and educators—the National Reading Panel—to evaluate the research and recommend guidelines. Published in 2000, the panel’s report recommended a mix of the two approaches, stating that “systematic phonics instruction should be integrated with other reading instruction to create a balanced reading program.” The authors added:

… literacy acquisition is a complex process for which there is no single key to success. Teaching phonemic awareness does not ensure that children will learn to read and write. Many competencies must be acquired for this to happen.

The findings allayed some of the debate over how to teach reading. But the Common Core reading standards raised new questions around what reading materials should be taught, including nonfiction and informational texts that “highlight the growing complexity of the texts students must read to be ready for the demands of college, career, and life.” The standards also aimed to set a higher bar for literacy beyond reading. Students were expected to be able to cite text-specific evidence in argumentative and informational writing.

Yet for all the focus on facts and evidence, the standards’ writers did not specify what should be read at each grade level. While they offer examples of books appropriate for each grade, states and districts are expected to determine the most appropriate content. By setting high expectations for what students should be able to read while refraining from prescribing specific steps to get there, the standards left educators to find their own resources. That ambiguity has given publishers, researchers and entrepreneurs license to shape the path.

How ELA Tools Evolved


Source: EdSurge

Tracking Readers

Digital book collections have long promised to expand the availability of fiction and nonfiction books. But now such tools also offer teachers a more convenient way to track reading than reviewing students’ self-recorded logs. Today’s products offer data dashboards that chronicle how many books were read, how long students spent reading and which vocabulary words students looked up. Often digital texts come embedded with questions written by content experts or, in some cases, created by teachers themselves.
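The bookkeeping behind such dashboards amounts to rolling raw reading-log events up into per-student totals. Here is a minimal sketch in Python; the event fields are invented for the example and don’t reflect any particular product’s data model.

```python
from collections import defaultdict

def summarize(events):
    """Aggregate reading-log events into per-student dashboard stats.

    Each event is a dict with 'student', 'book', 'minutes', and an
    optional 'lookups' list of vocabulary words the student looked up.
    """
    stats = defaultdict(lambda: {"books": set(), "minutes": 0, "lookups": []})
    for e in events:
        s = stats[e["student"]]
        s["books"].add(e["book"])          # count distinct titles, not sessions
        s["minutes"] += e["minutes"]       # total time spent reading
        s["lookups"].extend(e.get("lookups", []))
    return {
        student: {
            "books_read": len(v["books"]),
            "minutes": v["minutes"],
            "words_looked_up": sorted(set(v["lookups"])),
        }
        for student, v in stats.items()
    }


dashboard = summarize([
    {"student": "ana", "book": "Holes", "minutes": 20, "lookups": ["arid"]},
    {"student": "ana", "book": "Holes", "minutes": 15,
     "lookups": ["perseverance", "arid"]},
])
```

Two sessions with the same book collapse into one title read, while minutes accumulate and duplicate vocabulary lookups are deduplicated.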

Given the capability of tools to capture information about students’ reading habits, it’s “important for teachers to have frameworks and dashboards to make that data actionable,” says Jim O’Neill, chief product officer at Achieve3000. “By having a sense of whether students are comprehending the text, or how much they’ve read, teachers can provide the appropriate follow-up [support].”

Let’s Lexile

The broad scope of available online reading materials makes a long-standing challenge even more pressing: How can teachers identify which texts are most appropriate for their students? Figuring out the right level of complexity for every student—accounting for subject matter, text complexity and other factors—is subjective and, at best, an inexact science. Both educators and developers have turned to reading frameworks that attempt to quantify text difficulty through measures such as word length, word count and average sentence length.

“Almost every major edtech literacy company will report on text complexity in some form,” adds O’Neill. A popular framework used by his company and other adaptive literacy products is the Lexile, which measures readers’ comprehension ability and text difficulty on a scale from below 0L (for beginning readers) to over 2000L (advanced) based on two factors: sentence length and the frequency of “rare” words. Many products today will assign students a Lexile score (based on how they perform on assessments after reading a text) and recommend reading content at the appropriate level. Some companies, such as Newsela and LightSail, present the same content rewritten at different Lexile levels so that all students can read and discuss the same story.
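The actual Lexile formula is proprietary, but a toy score in the same two-factor spirit (average sentence length plus the share of “rare” words) can be sketched as follows. The common-word list and the weighting here are invented purely for illustration.

```python
# A tiny stand-in for a frequency list of "common" words; real readability
# formulas use corpora of millions of words.
COMMON = {"the", "a", "and", "to", "of", "in", "it", "is", "was",
          "where", "do", "polar", "bears", "live"}

def difficulty(text):
    """Illustrative two-factor difficulty score (NOT the Lexile formula).

    Combines average sentence length with the fraction of words that
    fall outside the common-word list; higher means harder.
    """
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    words = normalized.lower().replace(".", " ").split()
    avg_sentence_len = len(words) / len(sentences)
    rare_share = sum(1 for w in words if w not in COMMON) / len(words)
    return avg_sentence_len + 10 * rare_share  # arbitrary weighting
```

An early-reader sentence built from common words scores lower than a sentence of the same shape packed with rare vocabulary, which is exactly why sentence length alone (as in the Grapes of Wrath example below) can mislead.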


Despite the popularity of Lexile levels, some researchers, such as Elfrieda Hiebert, a literacy educator and chief executive officer of TextProject, caution against relying exclusively on Lexile numbers to find grade-appropriate texts. She has pointed out, for instance, that The Grapes of Wrath, a dense book for most high schoolers, has a lower Lexile score (680L) than the early reader book Where Do Polar Bears Live? (690L). The former has shorter sentences (with plenty of dialogue) while the latter has longer ones.

The Lexile is just one of seven computer formulas that Common Core standards writers have found to be “reliably and often highly correlated with grade level and student performance-based measures of text difficulty across a variety of text sets and reference measures.” Established companies, including Pearson and Renaissance Learning, have developed alternatives to Lexile. Another effort, the Text Genome Project, which Hiebert is advising, uses machine learning technology to identify and help students learn the 2,500 word families (groups of related words such as help, helpful and helper) that make up the majority of texts they will encounter through high school.

Nod to Nonfiction

The Common Core is not the first effort to emphasize nonfiction and informational texts. In 2009, the National Assessment of Educational Progress (NAEP) called for a 50-50 split between fiction and nonfiction reading materials for fourth-grade students, and a 30-70 ratio by twelfth grade. Common Core reinforced that message: A 2015 NAEP survey found that the percentage of fourth-grade teachers who used fiction texts “to a great extent” declined from 63 percent to 53 percent between 2011 and 2015, while the share using nonfiction rose from 36 to 45 percent over the same period.


Source: National Assessment of Educational Progress

Companies have noted this shift, and many offer nonfiction content as a selling point. Achieve3000, LightSail Education and Newsela employ writers who produce original nonfiction articles and also syndicate stories from news publishers, rewriting them at different Lexile levels. Such content comes embedded with formative assessments to gauge students’ reading comprehension. Other startups, such as Listenwise, offer audio clips from public radio stations, along with comprehension and discussion questions, to help students build literacy through online listening activities.

Writing to Read

“Writing about a text should enhance comprehension because it provides students with a tool for visibly and permanently recording, connecting, analyzing, personalizing, and manipulating key ideas in text.”

So state the authors of “Writing to Read,” a meta-analysis published in 2010 of 50 years’ worth of studies on how writing practices affect students’ reading. The need for this skill only grows in the internet era, as students must be able to comprehend, assess, organize and communicate information from a variety of sources.

According to the Common Core writing standards, students are expected to start writing online by fourth grade, and by seventh grade should be able to “link to and cite sources as well as to interact and collaborate with others.”

Online writing tools—most notably Google Docs, which the company says has more than 50 million education users—allow teachers and students to comment and collaborate in the cloud. Veteran product MYAccess offers patented technology to automatically score papers and provide customized feedback. NoRedInk and Quill offer interactive writing exercises that let students sharpen their grammar and technical writing skills. Other startups, such as Citelighter and scrible, scaffold the research and writing process to help students organize their notes and thoughts. Their progress—words written, sources cited, annotations—is captured on a dashboard that teachers can monitor.

Other tools are more ambitious. CiteSmart, Turnitin and WriteLab use natural language processing to provide automatic feedback beyond typical spelling and grammar checks, attempting to point out errors in logic and clarity. (Our test runs with these tools, however, surfaced questionable feedback, suggesting they still need fine-tuning. Some core instructional tasks, it turns out, technology has yet to perfect.)

Assessment

In Search of the Middle Ground

Through embedded assessments, educators can see evidence of students’ thinking during the learning process and provide near real-time feedback through learning dashboards so they can take action in the moment.

2016 National Education Technology Plan

Students find tests stressful for good reason. Results not only evaluate what they have learned, but can be used to determine whether they graduate or get into college. Such assessments are “summative” in that they aim to evaluate what a student has learned at the conclusion of a class. In 2002, when the U.S. government tied school funding to student outcomes through the No Child Left Behind law, tests became stressful for educators as well.

With so much at stake, testing became a top priority in many classrooms. A 2015 survey of 66 districts by the Council of the Great City Schools found that U.S. students took an average of eight standardized tests every year—roughly 112 between pre-kindergarten and high school graduation. Testing fever was followed by fatigue; nearly two-thirds of parents in a Gallup poll released that year said there was too much emphasis on testing.

But tests need not be so punitive. For decades, education researchers have argued that tests can be used during—not after—the learning process. In 1968, educational psychologist Benjamin Bloom argued that “formative” assessments could diagnose what a student knew, enabling teachers to adjust their instruction or provide remediation. Students could also use these results to better understand and reflect on what they know.


To check for understanding, teachers can use formative assessments in the form of short quizzes delivered at the beginning or end of class, journal writing and group presentations. (Here are 56 examples.)

“There’s no emotional stress associated with formative assessments,” said Cory Reid, chief executive officer of MasteryConnect. “They help teachers engage with students during the learning process.”

“In moderation, smart strategic tests can help us measure our kids’ progress in schools [and] can help them learn,” President Obama said in a video address.

“Tests should enhance teaching and learning,” Obama continued. In December 2015, he signed the Every Student Succeeds Act, allowing states more flexibility in determining how and what they could use to assess students. By doing so, the government opened the door to let states decide what works best for their schools.

Summative tests still remain, but the industry has shifted its focus to embedding assessments into the teaching and learning process. In addition, academic achievement is no longer the sole focus; technologists are attempting to quantify noncognitive factors, including student behavior and school culture, all of which influence how students learn.

How Assessment Tools Evolved


Credit: Vixit/Shutterstock

The Many Forms of Formative Tests

In the 1970s, Scantron Corporation offered one of the most popular and commercially successful technologies for administering formative and summative tests: bubble sheets, filled out with #2 pencils, that could be graded automatically. A couple of decades later, “clickers”—devices with buttons that transmit responses to a computer—offered an even quicker way for teachers and students to get feedback on multiple-choice questions.

Today, web-based and mobile apps can deliver formative assessments and results more cheaply and efficiently. Smartphones and web browsers have become the new clickers, delivering instantaneous feedback. In classrooms where not every student has a computer or a phone, some teachers use apps to snap photos of a printed answer sheet and immediately record grades. And as teachers use more online materials, there are also tools that let them overlay questions on text, audio or video resources available on the internet.

Student responses from formative assessment tools can be tied to a teacher’s lesson plans or a school’s academic standards. This information can help teachers pinpoint specific areas where students are struggling and provide targeted support.

Faster feedback also means that assessments can be given even as lessons are going on. “If you know what a student knows when they know it, that informs your instruction as a teacher,” says Reid. That data can “enrich your teaching and help change a student’s path or trajectory.”

Beyond Multiple Choice

The Common Core tests, which many students take on computers, introduced “technology-enhanced items” (TEIs). These allow students to drag-and-drop content, reorder their answers and highlight or select a hotspot to answer questions. Such interactive questions, according to the U.S. Department of Education’s 2016 National Education Technology Plan, “allow students to demonstrate more complex thinking and share their understanding of material in a way that was previously difficult to assess using traditional means,” namely through multiple choice exams.


Source: U.S. Department of Education, Office of Educational Technology, Future Ready Learning: Reimagining the Role of Technology in Education, Washington, D.C., 2016.

A well-designed TEI should let educators “get as much information from how students answer the question in order to learn whether they have grasped the concept or have certain misconceptions,” according to Madhu Narasa, CEO of Edulastic. His company offers a platform that allows educators to create TEIs for formative assessments and helps students prepare for Common Core testing. Another startup, Learnosity, licenses authoring tools to publishers and testing organizations to create question items. (Here are more than 60 different types of TEIs.)

Yet teachers and students need training to use TEIs. And the latest TEIs may not always work on older web browsers and devices. One early version of the Common Core math test developed by Smarter Balanced Assessment Consortium featured TEIs that even adults found difficult to use. And, while TEIs offer more interactivity, their effectiveness in measuring student learning remains unproven. A 2015 report from Measured Progress, another developer of Common Core tests, suggested “there is not broad evidence of the validity of inference made by TEIs and the ability of TEIs to provide improved measurement. Without such research, there is no way to ensure that TEIs can effectively inform, guide, and improve the educational process.”

Show Me Your Work

Tests are not the only way for students to demonstrate understanding. Through hands-on projects, students can demonstrate both cognitive and noncognitive skills along with interdisciplinary knowledge. A science fair project, for example, can offer insights into students’ command of science and writing, along with their communication, creativity and collaboration skills.

The internet brought powerful media creation tools—along with cloud-based storage—into classrooms, allowing students to create online. Companies such as FreshGrade offer digital portfolio tools that aim to help students document and showcase their skills and knowledge through projects and multimedia creations in addition to homework and quizzes. Through digital collections of essays, photos, audio clips and videos, students can demonstrate their learning through different mediums.

Games as Test


Credit: SimCity

What can games like SimCity, Plants vs. Zombies and World of Warcraft tell us about problem-solving skills? A growing community of researchers, including Arizona State University professor James Paul Gee, argue that well-designed games can integrate assessment, learning and feedback in a way that engages learners to complete challenges. “Finishing a well-designed and challenging game is the test itself,” he wrote in 2013.

GlassLab, a nonprofit that studies and designs educational games, has developed tools to infer mastery of learning objectives from gameplay data. These tests are sometimes called “stealth assessments,” as the questions are directly embedded into the game. The group has described at length how psychometrics, the science of measuring mental processes, can help game designers “create probability models that connect students’ performance in particular game situations to their skills, knowledge, identities, and values, both at a moment in time and as they change over time.”
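GlassLab has not published its models in reproducible detail, so as one standard psychometric technique for this kind of inference, here is a sketch of Bayesian knowledge tracing: a common way to update the probability that a player has mastered a skill after each observed game action. All parameter values below are invented for illustration.

```python
def update_mastery(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian knowledge-tracing update from a single observation.

    slip:  chance a player who knows the skill still fails the action
    guess: chance a player who lacks the skill succeeds anyway
    learn: chance the skill is acquired during this step
    """
    if correct:
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - guess))
    # Account for learning that may occur during the step itself.
    return posterior + (1 - posterior) * learn


# Trace a player's estimated mastery across four observed game actions.
p = 0.3  # prior probability of mastery
for outcome in [True, True, False, True]:
    p = update_mastery(p, outcome)
```

Each success pushes the mastery estimate up and each failure pulls it down, but slip and guess parameters keep single observations from swinging the estimate too far—the “probability models” idea GlassLab describes, in miniature.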

A 2014 review of 69 research studies on the effectiveness of games, conducted by the research group SRI International, offers supporting evidence that digital game interventions are more effective than non-game interventions in improving student performance. But other studies offer a mixed picture. A study led by Carnegie Mellon University researchers on the popular algebra game DragonBox found that “the learning that happens in the game does not transfer out of the game, at least not to the standard equation solving format.” Like the Brazilian “street math” kids (see the math profile), these students are capable of solving math problems—just not on a traditional paper exam.

Noncognitive Skills

Educators and researchers also believe that noncognitive skills—including self-control, perseverance and growth mindset—can deeply influence students’ academic outcomes. In 2016, eight states announced plans to work with the nonprofit CASEL (Collaborative for Academic, Social, and Emotional Learning) to create and implement standards around how social and emotional skills can be introduced into classroom instruction.

Today, developers are seeking ways to quantify factors such as student behavior and school climate. Tools such as Kickboard and LiveSchool record, track and measure student behavior. Panorama Education lets educators run surveys to learn how students, families and staff feel about topics such as school safety, family engagement and staff leadership. Tools like these expand the use of assessments beyond simply measuring student performance on specific subjects and cognitive tasks.

SAMR: How Will We Know If Technology Will Make a Difference?

Will shiny gadgets help educators do the same thing—or enable new modes of teaching and learning? Here’s a popular framework to help us understand how technology can change practice.

No matter what features are built into an edtech product, the technology is unlikely to impact learning if it’s misapplied. “Putting technology on top of traditional teaching will not improve traditional teaching,” said Andreas Schleicher, director of the Directorate for Education and Skills at the Organisation for Economic Co-operation and Development (OECD), in an interview with EdSurge earlier this year.

A 2015 report by the OECD found “no appreciable improvements in student achievement in reading, mathematics, or science in the countries that had invested heavily in ICT for education.” Noted Schleicher:

“The reality is that technology is very poorly used. Students sit in a class, copy and paste material from Google. This is not going to help them to learn better.”

But there are several caveats. First, not every traditional teaching practice needs to be reinvented—some work well. Second, no technology can by itself “revolutionize” learning. And third, to get powerful results—the kind that drive student learning—technology must be aligned with practice in purposeful ways.

But first, educators need to know which is which.

As a teaching fellow at Harvard University in the late 1980s, Ruben Puentedura started paying attention to how educators used tools in the classroom. Later, as the director of Bennington College’s New Media Center, he further explored how faculty and students integrated technology and instruction to reach the best learning outcomes. His efforts led him to start a consulting firm, Hippasus, that works with schools and districts to adopt technology.

In 2002, he published the SAMR framework to help educators think about how to integrate instructional practice and technology to reach the best outcomes for students. SAMR defines how technology impacts the teaching and learning process in four stages:

Substitution (S): Tech acts as a direct tool substitute, with no functional change in instruction.

Augmentation (A): Tech acts as a direct tool substitute, with functional improvement.

Modification (M): Tech allows for significant task redesign.

Redefinition (R): Tech allows for the creation of new tasks, previously inconceivable.

The SAMR framework is centered around the premise that technology, when used strategically and appropriately, has the potential to transform learning and improve student outcomes. Puentedura has also applied this framework to existing education research to suggest that greater student outcomes can occur when edtech tools are used at the later stages of the framework (modification and redefinition).

Preparing to use SAMR

To start, Puentedura says teachers must be clear about what outcome they want for their students. “The purpose, the goals of teachers, schools and students, are the key drivers in how technology is used,” he says.

“What is it that you see your students not doing that you’d like them to do? What type of knowledge would you like them to explore that they’re not exploring? What type of opportunities for new visions, new ideas, new developments would you like to pick up on?”

Additionally, it is important for teachers to identify how technology is currently used in the classroom, as a reference point for moving through the stages of SAMR. This requires an understanding of available resources—not just the technology that students can access, but also time and support required to use the tools well.


New technologies are often first used at the substitution level, especially when teachers and students are unfamiliar with the tools. This level of usage has its merits, even if it may not radically change instructional practices. Reading digital textbooks may, in the long run, be cheaper for schools than ordering new print versions every time the content is updated. Having students compose essays using a cloud-based word processor makes it easier for teachers to track and grade assignments.

The SAMR framework is not just about technology in and of itself, but rather what educators and students can use the tech to accomplish. “Changes in the tools themselves matter less than how you’re thinking about the learning objectives,” explains Jim Beeler, Chief Learning Officer at Digital Promise, who has helped schools roll out programs in which every student has a digital device (called 1:1 programs). After all, the same tool can be used in different stages. A digital textbook, for example, can be used as a substitute for print if all students do is read, highlight and annotate. But if the textbook includes speech synthesis or audio features, the students’ reading experience is augmented through the addition of the auditory mode of learning.

A Primer on SAMR

Here are some guiding questions and a familiar type of assignment as an example—sharing reflections on a reading assignment—to better illustrate the SAMR framework in practice.


SAMR Misconceptions

Although Puentedura’s studies suggest that greater student outcomes can be achieved at the redefinition level, he warns against the notion that every teacher should aspire to use technology to redefine their practice. “Are you going to get more impact upon student outcomes from using technology at the R level than at the S level? Sure,” he says, “but that doesn’t mean that there aren’t many, in fact, probably a large majority of technology uses that work just fine at the S and A level.”

  1. SAMR is just about using technology

    SAMR is designed to analyze the intersection of technology and instructional practices. The framework focuses on the changes that technology enables—not the technology itself. Make no mistake—educators and students are the ones who make learning happen, not the technology.

  2. It is better to be further “up” the framework

    Not every instructional practice needs to be redefined; as Puentedura points out, often “substitution” can be the right form of change. It can be exhausting and inappropriate for teachers and students to constantly teach and learn at the modification and redefinition levels. Educators need to find the right mix of activities that are appropriate for their learning objectives and employ technology in the way that best fits those goals.

  3. Change is always necessary

    Don’t change just for the sake of change. SAMR—or any other framework—may offer a way to describe changes in technology usage. But that does not mean that teachers should continually strive to change their practices. Teachers must have a clear vision of their instructional goals and desired student outcomes before devising ways to implement new tools in a classroom.

Case Studies: From Technology to Practice

Technology can make a difference. Here are a dozen profiles of how educators from across the country have used tools to support instructional needs and transform teaching practices.

Math

  • Substitution: A Free Tool to Keep a Pulse on Student Learning
  • Augmentation: Addressing the Gaps of All Learners
  • Modification: Learning Linear Equations in One Week, Not One Year
  • Redefinition: Playlists That Put Students in Control

ELA

  • Substitution: Read All You Want
  • Augmentation: Ditch the Paper. Let’s Make a Podcast!
  • Modification: 90 Second Videos That Inspire Discussion
  • Redefinition: Taking Reading Assignments to the Next Level

Assessment

  • Substitution: Forms for Formative Assessments
  • Augmentation: Custom-Built Quizzes for Real-Time Intervention
  • Modification: Formative Assessments Enriched with Data
  • Redefinition: From Paper and Pencil to Real World Assessment

Conclusion

Technology is often conflated with innovation. Yet tools are just part of the equation. Innovation entails humans changing behavior.

In education, technological improvements—in the form of faster broadband, devices or smarter data analytics—must be matched by a willingness to refine and transform existing practices. What these changes will look like is unsettled, but technology allows teachers and students to explore different paths.

Well-designed tools can help educators realize the educational “best practices” put forth decades ago by researchers like Benjamin Bloom. Data from formative assessments can give teachers better insight into what each learner needs and help them adjust strategies. Games and online collaborative projects allow educators to teach in ways that researchers believe can better engage students.

The most useful educational tools are also flexible. Teachers are adapting media and productivity software for purposes beyond what they were designed for.

After all, what a math class needs may not be online adaptive curriculum, but rather creative tools that allow students to engage and express knowledge in new ways.

Changing ingrained habits and codified practices requires patience. Not all lectures, lesson plans, group projects or homework demand to be uprooted. As our case studies above show, some teachers use technology to do the same tasks more efficiently. Others are creating entirely new activities that transform learning from a solo to social experience.

Whether teachers reinforce or redefine instructional practices with technology partly depends on their environment. Do they have the training to implement new tools? How can schools support teachers in not just experimenting with new methods of teaching and learning—but scale these practices across the campus and district? How can these changes make education opportunities more equitable? These questions will help frame the focus of the next chapter. As classrooms change, so do schools.

Feds Launch Inaugural Teacher and School Leader Grant Competition

The United States Department of Education yesterday announced a new grant competition to train teachers who serve low-income and minority children.

The Teacher and School Leader (TSL) program is designed to assist states, local educational agencies and nonprofit organizations in developing, implementing, improving and expanding “comprehensive performance-based competition systems or human capital management systems for teachers, principals and other school leaders,” according to the program website.

The TSL grants are especially geared to help educators in high-need schools where there is a need to “raise student academic achievement and close the achievement gap between high- and low-performing students.” Approximately $250 million has been requested for the program for FY 2017, but funding is contingent on future appropriations by Congress.

The grants will:

  • Allow educators to identify opportunities to improve their schools;
  • Create professional development and support systems that are tailored to educators’ individual needs; and
  • Help districts and schools attract “a diverse, effective workforce,” according to the TSL program site.

Authorized under the Every Student Succeeds Act, the TSL program replaces the Teacher Incentive Fund (TIF) program that provided $2 billion to fund similar efforts to increase student achievement over the last 10 years, particularly for math and science.

Applications for the grant competition opened today. The deadline to submit applications is March 24, 2017. Application materials are available on the ED site.

Ka’Ching! 2016 US Edtech Funding Totals $1 Billion

This is a repost of an article that appeared on EdSurge

Santa proved a little more parsimonious with U.S. edtech companies, which altogether raised an estimated $1.03 billion across 138 venture deals in 2016. Those tallies dipped from 2015, which saw 198 deals that totaled $1.45 billion. (Or, from a different perspective, U.S. edtech companies raised roughly 57 percent of what Snapchat did in its $1.8 billion Series F round.)

In this annual analysis, EdSurge counts all investments in technology companies whose primary purpose is to improve learning outcomes for all learners, regardless of age. This year startups that serve primarily the K-12 market raised $434 million; those targeting the postsecondary and corporate learning sector raised $593 million.

From 2010 through 2015, venture funding dollars for U.S. edtech startups increased every year. It’s worth noting that even though 2016 broke that streak, the dollar total still surpasses every year before 2015.

The downturn isn’t specific to the education industry but rather reflects a broader slowdown across all technology sectors, says Tory Patterson, managing partner at Owl Ventures. “There’s a broader shift in venture capital where there’s less exuberance for companies that haven’t really nailed the business model,” he tells EdSurge.

The dip in dealflow has also been felt in the health, real estate, construction and financial technology sectors. Across the globe, venture deals returned to 2014 levels, according to CB Insights. The market uncertainty has led some high-profile companies to hit pause on bigger plans. SoFi, which offers loans and other student services, pushed back plans for its initial public offering this year. Pluralsight, an online learning company that was expected to IPO, is also on hold.

Venture-backed startups tend to swing between two ends of a spectrum, says Amit Patel, a partner at Owl Ventures. On one end are businesses “that grow aggressively but have no revenue associated. The other are those laser focused on business model and revenue. The mood is swinging towards the latter.”

Commitments to “impact” or “mission” aside, all investors—even in education—want to see returns. Often that means converting users into dollars.

“We’ve noticed VCs becoming more selective about their education investments, asking more questions about revenue growth and the leading indicators of product adoption, implementation timelines and ultimately usage,” says Jason Palmer, a general partner at New Markets Venture Partners. Unlike Instagrams and other “5-year consumer internet hits,” more investors, according to Palmer, now realize “it can take 10 or 15 years to build a sustainable education business.”

Breaking Down the Numbers

As in previous years, companies offering tools in the postsecondary and “other” categories out-raised other products. (“Other” includes a mix of products that help business professionals develop skills, are aimed at parents, or are not used in K-12 or higher-ed institutions.)

Expect this trend to continue, says Palmer, as investors come to “a greater recognition that higher education institutions adopt and implement more rapidly than K-12 [schools].” Tuition dollars may be one reason why they have adopted technologies such as student retention and predictive analytics platforms. “Colleges and universities are facing financial pressures to keep students who contribute to their revenues. In K-12, you don’t have the same urgency of students as revenue drivers,” he suspects.

This year saw no mega-rounds for startups in the postsecondary sector—unlike 2015, which saw HotChalk, Udacity, Udemy, Coursera and Civitas Learning account for more than $520 million of funding. (Udemy did lead this pack in 2016 with a $60 million round.)

In fact, the biggest funding round of 2016 for a U.S.-based startup went to Age of Learning, which raised $150 million and accounts for 55 percent of the funding total for K-12 curriculum products. The Glendale, Calif.-based company is the developer of ABCmouse, a collection of online learning activities aimed at young children. First developed for the consumer and parent market, the tool is attempting to make headway into schools and classrooms.

Choosier Angels

Angel and seed level funding rounds, which signal investors’ interest in promising but unproven ideas, saw a small decline as well. The 66 deals at this stage are the fewest since 2011, although they totaled $62.5 million—roughly on par with 2014 levels.

Over the past five years, the average value of seed rounds has been increasing, from around $600K in the early years of this decade to roughly $1 million in 2015 and 2016. Discounting edtech accelerators, which typically invest $20K to $150K in startups, the 2016 seed round average actually surpasses $2 million. (We counted 28 such publicly disclosed seed rounds totaling $60.2 million)
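The arithmetic behind those averages is easy to verify. Here is a quick sketch (figures are the ones quoted above; the variable names are ours):

```python
# Sanity-checking the 2016 seed-round averages quoted in the article.
# All dollar figures are in millions of US dollars.

total_deals = 66          # all angel/seed deals counted in 2016
total_raised = 62.5       # $62.5M raised across those deals

non_accel_deals = 28      # publicly disclosed seed rounds, excluding accelerators
non_accel_raised = 60.2   # $60.2M raised across those rounds

avg_all = total_raised / total_deals                 # ~0.95, i.e. roughly $1M
avg_non_accel = non_accel_raised / non_accel_deals   # ~2.15, surpassing $2M

print(f"Average seed round, all deals: ${avg_all:.2f}M")
print(f"Average seed round, ex-accelerators: ${avg_non_accel:.2f}M")
```

The gap between the two averages also shows how small accelerator checks are: the 38 accelerator deals account for only about $2.3 million combined, consistent with the $20K–$150K range noted above.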

Fewer but bigger seed deals are “a sign of maturation in the industry,” says Shauntel Poulson, a general partner at Reach Capital. Unlike previous years, when upstarts and ideas popped up across the market, she believes the market is currently in a “stage of consolidation where leaders and proven ideas are emerging.”

Aspiring entrepreneurs ought to pay heed. What this means is that “the bar for seed rounds is getting higher,” Poulson adds. “Before it was about a promising idea and a great team. Now you need to show more traction and even some revenue.” Over the past few years investors have learned that “it’s best to focus on business model sooner rather than later.”

Palmer believes the days when startups could raise money before making any may be over. Expect to get grilled over “revenue growth, product adoption, implementation timelines and ultimately usage,” he says. To round out the questions, “VCs are also starting to ask about product efficacy.”

Looking Ahead

Unsurprisingly, investors held a cheery outlook for 2017, expecting funding totals to hold steady or even increase. More companies will be able to demonstrate sustainable revenue, predicts Owl Ventures’ Tory Patterson, and in turn whet investors’ appetite. “We think a lot of companies will be able to hit the $10 million revenue milestone.”

Emerging technologies such as artificial intelligence, augmented and virtual reality could drive further investments as their applications to help improve learning outcomes become clearer. Also expect to see Chinese investors paying closer attention, says Poulson. “There’s a big after-school market [in China] and an opportunity to leverage a lot of the content that’s being developed in the U.S.”

There’s also word on the street that several education-focused venture firms have re-upped their coffers with new funds to support proven, maturing startups. Stay tuned for more details.

Disclosure: Owl Ventures and Reach Capital are investors in EdSurge

The mind of a student today

December 26, 2014 Below is an interesting visual I came across through a tweet from We Are Teachers. The visual maps out some really intriguing facts about students today. These facts are based on different studies and surveys conducted mainly on US students. I went through this resource and devised this brief synopsis: Minority students attending US schools will make up a majority of all students…

The Best of the Consumer Electronics Show 2016

Above: Panasonic’s transparent microLED display at CES 2016.

Image Credit: Dean Takahashi
I’ve returned from the biggest battleground of tech, the Consumer Electronics Show in Las Vegas.
My Intel Basis Peak smartwatch told me that, over four days at CES, I walked 73,376 steps, or 18,344 steps per day. Those steps felt heavier this year because I carried a shoulder bag instead of using a roller bag, per the new security rules at the event. On the plus side, I managed to come back without the nerd flu and without a blister like last year.
I did my best, but that means I still only covered a very small percentage of the 3,000-plus companies spread across 2.4 million square feet of exhibit space at CES. My eyes began to glaze over as I saw the enormous number of drones, augmented reality glasses, virtual reality headsets, robots, smart cars, fitness wearables, 3D printers, and smart appliances that were part of the Internet of Things (making everyday objects smart and connected). I have published 63 stories about CES products and events. (I should say, I’ll continue to publish stories from CES over the next couple of weeks.) I think this was my 20th CES, though I have lost count.
Inside the bubble of CES, which was attended by an estimated 150,000 people, I didn’t even know the stock market was melting down. CES is the place to look if we want to find the things that are going to save us from economic gloom, although we may have to really look. The global technology industry is expected to generate $950 billion in 2016, down 2 percent from a year ago, with the decline due in no small part to weakness in China. This year, I didn’t see much that was going to save the world economy and overcome the skepticism of natural-born cynics. You could certainly find partisans who will say that virtual reality or the Internet of Things will do that, as both movements have spread well beyond just one or two companies. But it’s a reach to say that these categories have already given us their killer apps.
Still, I had a lot of fun finding things that I liked, and there was no shortage of these. Without further ado, here’s my favorite technology from CES 2016:
Panasonic Transparent Display
The idea of a transparent display isn’t that new. Big tech companies have been targeting them at retailers for a while. But this week Panasonic showed off a 55-inch television for the living room. The display is embedded in a bookcase, where it can transparently show a kind of trophy case behind the glass. But then it turns to black and shows home portraits. The image swivels to reveal a personalized screen with a weather report or a screen displaying a liquid-like aquarium. And it can even show a television show. The display has micro light-emitting diodes. The screen has its limits, as it isn’t completely transparent, but it can display at a resolution of 1080p. This was a glimpse of the future, much like Panasonic’s Magic Mirror from a year ago. And I thought it was a wonderful example of how to make technology blend into the environment of the home.
Eyefluence
Above: Jim Marggraff, CEO of Eyefluence, wears an Oculus Rift headset.
Image Credit: Dean Takahashi
Eyefluence was the shortest demo I did at CES, but it was enough to show me the future of using your eyes to control things. The tiny Eyefluence sensors are attached to the inside of an Oculus Rift virtual reality headset and detect the smallest movements in your eyes. I blinked, turned my head, and moved my eyes around, but Eyefluence could still track when and how I wanted to control something. I could navigate through a menu without using my hands, a keyboard, or a mouse. It was fast. It only takes about a minute to learn how to follow Eyefluence’s instructions, after which you can start controlling things that are before your eyeballs. This could very well supply a major ingredient missing from virtual reality headsets and augmented reality glasses.
Vayyar’s 3D sensing
Israeli startup Vayyar uses 3D imaging with radio waves to see through solid surfaces. It can be used to show a 3D model of a cancerous growth in a woman’s breast. It can be used to detect the heartbeat of a person, such as a sleeping baby, in another room. Or it can be used to find studs or pipes that are hidden in a wall. It can see through materials, objects, and liquids. Vayyar can also detect motion and track multiple people in large areas. It works by shooting a radio wave into a solid object and measuring all of the ways that the wave bounces around as it hits various objects. Vayyar collects the reflections and analyzes them, putting them back together as a 3D image in real time. While it is powerful, the amazing technology doesn’t use a lot of power. It comes from seasoned technologists Raviv Melamed, Miri Ratner, and Naftali Chayat, who were inspired by military technology. Melamed, formerly of Intel, told us that the technology is inexpensive. And yes, if you have the ability to see through things, you’re Superman.
ODG’s ultra-wide-angle augmented reality glasses
Above: Dean Takahashi demos ODG’s augmented reality glasses.
Image Credit: Dean Takahashi
The Osterhout Design Group has taken its technology for night-vision goggles and turned it into augmented reality headsets for government and enterprises. The newest R-7 headset is like looking at a 65-inch TV screen that’s right in front of your eyeballs. The company demoed a future-generation technology with ultra wide-angle viewing. The R-7 has a 30-degree field of view, but the future product has a 50-degree field of view with a 22:9 aspect ratio. It’s more like sitting in the best seat in an IMAX theater, said Nima Shams, vice president at ODG. I was able to look at it and see a wide Martian landscape. The glasses are packed with technology, from Wi-Fi and Bluetooth radios to gyroscopes and altitude sensors. The R-7 costs $2,750, but there’s no telling how much the wide-angle display will be. At some point in the future, I fully expect that this experience is going to be better than going to an IMAX theater.
Cypress’s energy-harvesting solar beacon
Above: This solar-based Bluetooth energy beacon doesn’t need a battery.
Image Credit: Cypress
Beacons are devices that can connect to your smartphone using a local Bluetooth network. Retailers like to use them to send special offers to your smartphone. That technique can target people walking by a specific store and get them to come inside. But beacons often run out of battery. By combining technology from Spansion (which Cypress Semiconductor has acquired) and Cypress, the product designers can create a beacon with a solar energy array. Using that technology, the device can generate its own electricity and doesn’t need a battery. You can embed this kind of technology in any device that is part of the Internet of Things (smart and connected everyday objects). You could put a beacon in a cemetery and use it to send a story about the life of someone buried there. “We want the Internet of Things, but nobody wants to change 20 billion batteries,” said Eran Sandhaus, vice president at Cypress Semiconductor. Hundreds of potential advertisers are looking at it. We’ll definitely need new sources of power, whether kinetic or otherwise. This is how the Internet of Things is going to become practical, with billions of smart, connected objects that operate on the slimmest amount of power.
Netatmo’s Presence smart outdoor security camera
Above: Netatmo has a smart security camera.
Image Credit: Netatmo
Presence is a smart outdoor security camera that sends an alert based on an analysis of a scene. If someone is loitering around your house, Netatmo’s Presence will detect that person and send a message to your smartphone. It can detect the movements of your pet, or it can tell you if someone is dropping a delivery at your door. You can set the camera to monitor a particular zone and, using deep learning technology, analyze only certain types of motion. It also comes with a floodlight. Presence doesn’t dump a ton of video on you. You don’t have to take out an online storage subscription. When it identifies significant events, it saves the video so that you can view it, sparing you from scrubbing through long, unedited footage. Presence will be available in the third quarter.
LG Rollable Display
Above: LG’s rollable display
Image Credit: LG
Rollable and flexible displays seem like either science fiction or a waste of time. But the LG rollable OLED screen is real. We can roll up the screen like a newspaper, and, in fact, that might be a good use of the technology. LG is showing a prototype now that is as thin as paper and has a resolution of 810 x 1200, or almost 1 million pixels. I’m not sure how we’ll end up using it. But I suspect the rollable display will find many uses over time. This makes me feel like technology is becoming as disposable and flexible as a poster. You can go somewhere, put up a rollable screen, and then turn your surroundings into a movie theater or living room.
AtmosFlare 3D drawing
3D drawing is pretty cool. Adrian Amjadi of AtmosFlare showed me how to draw physical images in 3D, using the 3D drawing pen. The system uses ultraviolet light to cure a resin. You can pull on it and deform it any way you wish, essentially making something like the jellyfish in the video here. The resin sticks on porous things, but not on metal. The longer you leave the UV light on, the harder it becomes. The $30 system is on sale at Toys ‘R Us. The company says this will “forever change the way you do art.” I don’t know if it’s going to do that, but it did give me a small moment when I thought, “Wow, that’s cool.”
Medium painting and sculpting in Oculus Rift
Oculus VR came up with its “paint app” in September, but I finally got some hands-on time with it at CES. I was amazed at how easy it was to sculpt objects using two virtual hands (via the Oculus Touch hand controls and Oculus Rift headset). Expressing yourself with sculpting tools isn’t easy. But sculpting in the virtual space gave me a feeling of instant gratification. I started with a blank slate. Then I selected a tool for adding clay with one of my hands. I was able to change the way that the clay shot out of the Oculus Touch wand by rotating my hand. Then I was able to smooth out the edges, spray paint it, replicate it, and delete whole sections of it using my hands in the virtual world. It really makes you feel like you are sculpting something that is real. I can imagine it will be very easy to use a 3D printer to print out the 3D creations you build. You could certainly do something like this in a video game, like Media Molecule’s upcoming Dreams game on the PlayStation 4. But in VR, you feel like you are also inside the thing you are creating. You can turn the image to view it from new angles. This is one of those experiences that could make your head explode with creativity if you’re a 3D artist or sculptor.
Parrot Disco
Above: Parrot Disco
Image Credit: Parrot
Parrot has created a unique drone that can fly for 45 minutes on a single charge and reach speeds up to 50 miles per hour. The Parrot Disco is the French company’s latest entry into one of tech’s fastest-growing markets. The Disco is a flying wing that has a motor. It can fly itself or follow instructions you give it via an app. The drone can also take off and land by itself, using its own autopilot. If you use the Parrot Skycontroller, you can get a first-person view on a tablet screen of everything the drone is seeing. You don’t need any training to fly the drone, which has a range of two kilometers and can navigate its way back to you.

Bullish on Blended Learning Clusters

By Michael Horn, Contributor
An increasing number of regions are trying to create concentrated groups of blended-learning schools alongside education technology companies. These clusters may be key to advancing the blended-learning field and improving its odds of personalizing learning at scale so that every child can be successful.

There is a theoretical underpinning for being bullish on the value these clusters could lend to the sector. These early attempts at building regional clusters mirror in many ways the clusters that Harvard professor Michael Porter has written about as having a powerful impact on the success of certain industries in certain geographies. Porter defines a cluster as a geographic concentration of interconnected companies and institutions in a particular field.

“Clusters promote both competition and cooperation,” Porter wrote in his classic Harvard Business Review article on the topic, “Clusters and the New Economics of Competition.” He goes on to note that vigorous competition is critical for a cluster to succeed, but that there must be lots of cooperation as well—“much of it vertical, involving companies in related industries and local institutions.”

The benefit of being geographically based, he writes, is that the proximity of the players and the repeated exchanges among them “fosters better coordination and trust.” The strength comes from the knowledge, relationships, and motivation that build up, which are local in nature. Indeed, new suppliers are likely to emerge within a cluster, he writes, because the “concentrated customer base” makes it easier for them to spot new market opportunities or challenges that players need help solving.

From wine and technology in California to the leather fashion industry in Italy and pharmaceuticals in New Jersey and Philadelphia, clusters have endured and been instrumental in advancing sectors even in a world where technology has reduced the importance of geography.

As Clayton Christensen has observed, clusters may be particularly important in more nascent fields like blended learning. In such fields the ecosystem is still immature, performance has yet to overshoot users’ demands, and how the different parts of the ecosystem fit together is still not well understood, so the ecosystem is highly interdependent. Yet proprietary, vertically integrated firms do not (or, in the case of education, often cannot) stretch across the entire value network. In this circumstance, having a cluster of organizations close together, competing and cooperating, may be critical.

Perhaps the most promising blended-learning cluster is blossoming somewhat organically in Silicon Valley, where Silicon Schools Fund (where I’m a board member), the Rogers Family Foundation, and Startup Education are helping fund the creation of a critical mass of blended-learning schools, while traditional venture capitalists alongside funders like Reach Capital, Owl Ventures, GSV, and Learn Capital and accelerators like ImagineK12 are helping seed an equally critical mass of education technology companies.

The NGLC Regional Funds for Breakthrough Schools, one of the supporters of the Rogers Family Foundation’s efforts in California, has funded similar regional efforts in New Orleans with New Schools for New Orleans; Washington, DC, with CityBridge Foundation; Colorado with the Colorado Education Initiative; Chicago with Leap Innovations; and New England with the New England Secondary School Consortium.

Student Question | Is Social Media Making Us More Narcissistic?

Are social media like Facebook turning us into narcissists? The Times online feature Room for Debate invites knowledgeable outside contributors to discuss questions like this one as well as news events and other timely issues.

Student Opinion – The Learning Network
Questions about issues in the news for students 13 and older.

Do you spend too much time trying to be attractive and interesting to others? Are you just a little too in love with your own Instagram feed?

An essay addressing those questions was chosen by two of our Student Council members this week. Angie Shen explains why she thinks it’s important:
As the generation who grew up with social media, a reflection on narcissism is of critical importance to teenagers. What are the psychological and ethical implications of constant engagement with or obsession over social media? How does it change our relationship with others and how we see ourselves?

“Narcissism Is Increasing. So You’re Not So Special.” begins:

My teenage son recently informed me that there is an Internet quiz to test oneself for narcissism. His friend had just taken it. “How did it turn out?” I asked. “He says he did great!” my son responded. “He got the maximum score!”

When I was a child, no one outside the mental health profession talked about narcissism; people were more concerned with inadequate self-esteem, which at the time was believed to lurk behind nearly every difficulty. Like so many excesses of the 1970s, the self-love cult spun out of control and is now rampaging through our culture like Godzilla through Tokyo.

A 2010 study in the journal Social Psychological and Personality Science found that the percentage of college students exhibiting narcissistic personality traits, based on their scores on the Narcissistic Personality Inventory, a widely used diagnostic test, has increased by more than half since the early 1980s, to 30 percent. In their book “Narcissism Epidemic,” the psychology professors Jean M. Twenge and W. Keith Campbell show that narcissism has increased as quickly as obesity has since the 1980s. Even our egos are getting fat.

It has even infected our political debate. Donald Trump? “Remarkably narcissistic,” the developmental psychologist Howard Gardner told Vanity Fair magazine. I can’t say whether Mr. Trump is or isn’t a narcissist. But I do dispute the assertion that if he is, it is somehow remarkable.

This is a costly problem. While full-blown narcissists often report high levels of personal satisfaction, they create havoc and misery around them. There is overwhelming evidence linking narcissism with lower honesty and raised aggression. It’s notable for Valentine’s Day that narcissists struggle to stay committed to romantic partners, in no small part because they consider themselves superior.

The full-blown narcissist might reply, “So what?” But narcissism isn’t an either-or characteristic. It’s more of a set of progressive symptoms (like alcoholism) than an identifiable state (like diabetes). Millions of Americans exhibit symptoms, but still have a conscience and a hunger for moral improvement. At the very least, they really don’t want to be terrible people.

Students: Read the entire article, then tell us …

— Do you recognize yourself or your friends or family in any of the descriptions in this article? Are you sometimes too fixated on collecting “likes” and thinking about how others see you?

— What’s the line between “healthy self-love” that “requires being fully alive at this moment, as opposed to being virtually alive while wondering what others think,” and unhealthy narcissism? How can you stay on the healthy side of the line?

— Did you take the test? What did it tell you about yourself?

Henry Xu, another Student Council member who recommended this article, suggests these questions:

— What about Instagram, Facebook, Snapchat and other social media feeds makes them so hard to put down?

— Do you think this writer’s proposal of a “social media fast” is a viable way to combat narcissism?

— For those who aren’t as attached to social media, do challenges from an overinflated sense of self still arise? If so, from where?

— If everyone is becoming more narcissistic, does that make narcissism necessarily a bad thing?

Want to think more about these questions? The Room for Debate blog’s forum “Facebook and Narcissism” can help.

2015’s Best and Worst States for Teachers

Most educators don’t pursue their profession for the money. But that doesn’t justify paying teachers any less than they deserve, considering the profound difference they make in people’s lives. In reality, however, teachers across the U.S. are shortchanged every year — their salaries consistently fail to keep up with inflation — while the law demands they produce better students.

It’s no surprise that the high turnover rate within the field has been likened to a revolving door. According to the National Center for Education Statistics, about a fifth of all newly minted public-school teachers leave their positions before the end of their first year, and nearly half leave the profession within their first five years.

Besides inadequate compensation, other problems persist in the academic environment. Many teachers, especially novices, transfer to other schools or abandon the profession altogether “as the result of feeling overwhelmed, ineffective, and unsupported,” according to ASCD. Without good teachers who are not only paid reasonably but also treated fairly, the quality of American education is bound to suffer.

To help ease the process of finding the best teaching opportunities in the U.S. — and draw attention to the states needing improvement — WalletHub compared the 50 states and the District of Columbia across 13 key metrics. Our data set ranges from the median starting salary to the projected number of teachers per student by 2022. The results of our study, as well as additional insight from experts and a detailed methodology, can be found below.

Main Findings

Overall Rank | State | ‘Job Opportunity & Competition’ Rank | ‘Academic & Work Environment’ Rank
1 Massachusetts 9 3
2 Virginia 2 14
3 Minnesota 3 10
4 Wyoming 4 13
5 New Jersey 20 2
6 Iowa 7 15
7 Wisconsin 13 8
8 Pennsylvania 1 22
9 Kansas 23 7
10 Maryland 12 17
11 Illinois 18 12
12 New York 5 26
13 Vermont 37 1
14 Utah 14 20
15 Kentucky 16 19
16 New Hampshire 34 6
17 North Dakota 35 5
18 Nebraska 31 11
19 Montana 29 16
20 Michigan 8 35
21 Delaware 15 30
22 Ohio 26 21
23 Indiana 11 33
24 Missouri 21 27
25 Texas 17 32
26 District of Columbia 10 46
27 Florida 25 31
28 Colorado 41 9
29 Arkansas 32 23
30 Alabama 18 40
31 Nevada 6 50
32 Idaho 24 36
33 Tennessee 33 28
34 Connecticut 48 4
35 Alaska 22 47
36 California 28 44
37 Georgia 29 45
38 Washington 39 29
39 Maine 49 18
40 Louisiana 27 49
41 Oklahoma 35 42
42 South Dakota 43 25
43 New Mexico 40 41
44 Rhode Island 46 24
45 South Carolina 38 48
46 Hawaii 44 38
47 Oregon 45 37
48 Mississippi 47 43
49 Arizona 42 51
50 North Carolina 50 34
51 West Virginia 51 39

[Artwork: Best States for Teachers]
Readers Respond to Redesigned, and Wordier, SAT

A math class at Match Charter School in Boston, which is doing a lot of test prep for the SAT. Reading passages will be harder and math problems wordier in the new test. Credit: Shiho Fukada for The New York Times

Is it unfair to some students that the redesigned SAT, being rolled out next month, will include longer and harder reading passages and wordier math problems than before? Anemona Hartocollis’s article on the topic drew more than 900 responses from readers.

Some stressed that college admissions tests, by their very nature, should winnow out weaker readers.

“Why would you want to accept students who can’t read and write at a college level regardless of their background?” asked Ed H. of Irvine, Calif. “Instead of complaining about the idea that it is unfair to certain students, why not make sure those students are better prepared? If the poor can’t read as well as the rich, then that’s the problem that needs to be addressed.” His comment was the most recommended by other readers.

Some readers zeroed in on a sentence in the article that noted educators “fear that the revised test will penalize students who have not been exposed to a lot of reading, or who speak a different language at home — like immigrants and the poor.”

Ed Bloom, from Columbia, S.C., wrote: “I’m a reading specialist. I went nuts. Let’s not penalize people who haven’t been exposed to a lot of driving by flunking them on the driving test. Let’s not penalize the pilot of our jet liner by keeping him out of the cockpit just because he hasn’t been exposed to a lot of flying. The correct way of thinking about all of the above is not to think of it as penalizing but, instead, a need to get that person the experience. … There are lots of ways to get children ‘exposed’ to reading.”

Adam from New York wrote: “You could call it ‘penalizing students who have not been exposed to a lot of reading.’ Or you could call it ‘evaluating students’ reading skills.’”

LindaP in Boston countered with a personal story. Her son is dyslexic, and he found the SAT tough. “The comments here make my blood boil,” she wrote. “‘Who wants a kid in college who can’t read proficiently?’ ‘Prepare them better.’ ‘Perhaps these kids aren’t college material.’ Life and learning is not a straight line, and these tests take many different kinds of learners and pigeonhole each and every one of them.” Her son, she noted, is now an M.D., Ph.D. with a specialty in hematology.

A commenter under the handle R-son from Glen Allen, Va., said his stepson, who is better in math than reading, would soon be taking the test. “The new SAT will be hard for him, but he has an advantage over other students — an $800 Kaplan prep course. So it boils down to this — he’ll score better on the SAT than a lower-income student with the same abilities whose family can’t afford to fork out close to 1K to prep for and take this test. So how is this test, in any form, fair?”

A few commenters critiqued the sample of five math SAT questions that accompanied the article. Ninety-three percent of readers answered the first question in the quiz correctly; 57 percent answered the fourth question correctly. Of an algebra problem about a phone repair technician, a reader using the handle “Kathy, WastingTime in DC” wondered, “Who gets a phone fixed these days?”

One reader, who admitted she answered only one of the five problems correctly, pointed to a question about a pear tree. Gabrielle from Los Angeles wrote: “I am a horticultural therapist who designed and built a therapeutic garden. Here’s the answer to figure out which pear tree to buy: use your relationships. Ask your friends what they’ve had success with. Call me crazy but after I left high school, I never took another math class, and it’s never held me back.”