Despite all the promises of the World Wide Web as a platform for online learning, there are still many questions about whether or not we are building learning tools and learning communities that welcome and support a diversity of ideas and people. How do stereotypes affect the ways in which technologies are developed and implemented? Here, P2PU Learning Lead Vanessa Gennarelli argues that we must take what we know about gender and performance in order to design assessments that open rather than close opportunities.
Nearly 25 years ago, Sherry Turkle and Seymour Papert offered us another way to recognize how people learn. Their seminal piece “Epistemological Pluralism” walked through how different kinds of thinkers—artists, poets, musicians—would pick up pieces of programming knowledge, map these slices to things they already knew (like blank verse) and assemble their own novel creations. In contrast to a linear approach to programming, which may appeal to a predominantly male group of learners, Turkle and Papert encouraged us to consider a “bricoleur” style: learning by association, conversation with the materials, watching interactions and iterating upon them.
Turkle and Papert outlined a glowing promise for the future of assessment—one that included different voices of creativity, specifically representing the learning pathways of women.
As educators, we have not delivered on their vision.
At Peer 2 Peer University we’ve thought extensively about the current state (and future) of assessment on the web. This piece will look at assessment and gender, specifically in terms of assignment types, interfaces, and priming. We’ll critique a few examples of current assessment practices and trends in online learning. From there, we’ll recommend a path to the future of assessment that is diverse, inclusive, and recognizes multiple paths to the “answer.” Because that is the goal of innovation in this space. Not making assessment more manageable. Or necessarily easier. The goal is to nurture a creative society of engaged citizens who think uniquely and engage collaboratively.
If we hold on to competition and multiple choice, we surrender our chances at that goal.
Gender, Performance and Mental Models
In situations of cognitive stress, such as a programming exercise or a math test, women and girls may experience pressure to overcome traits associated with their gender. This anxiety, commonly called stereotype threat, can inhibit the performance of those in the minority group (Steele, Spencer & Aronson, 2002). If you’re female and have ever walked into a room of all males to give a technical talk, you’re probably familiar with the sensation.
We want learners to perform at the best level of their ability. So, let’s see how stereotype threat appears in assessment and web tools, and from there discuss what we can do about it. We’ll start with how priming affects our thinking.
Priming refers to a set of expectations and stereotypes you might carry about your social group, often unbeknownst to you. For instance, if you’re elderly and reminded of your age, you might start to act in keeping with those expectations. It’s a sinister beast.
Researchers Jennifer Steele and Nalini Ambady found that even checking a box with your gender before a test can hamper performance (2006). Let’s reflect on that for a moment. Almost every survey you’ve ever taken and every sign-up form has this question—and it has a chilling effect on performance.
Let’s take a step back to think about priming in the context of the ed-tech landscape. How many educational tools in the App Store are marketed specifically at girls? How about gender questions in signup forms?
One way to stem stereotype threat is to give female learners a sense that they aren’t alone. Michael Inzlicht and Talia Ben-Zeev found that the ratio of males to females in a classroom may also trigger stereotype threat (2000). Further, a more masculine or traditionally “geeky” environment can have similarly negative effects (Cheryan et al., 2009).
I’ve been impressed with products and platforms that pay mind to issues of priming and gender. One of those is DIY.org, a project-based learning platform for kids. DIY.org is an umbrella for smaller subcommunities, like botany and beekeeping. In the sign-up flow, a user is prompted to select from a gallery of mostly gender-neutral avatars.
DIY.org is an example of using smarter defaults to bolster performance. And in online environments, the interfaces go a long way towards prompting interactions and feedback amongst learners.
The Interface is the Message
A few months ago I came across David A. Banks’s stellar article “Very Serious Populists” in The New Inquiry. In deconstructing the interfaces of various online communities, he pointed to the following graphic:
Take note of the platforms on the top and the bottom of the graph. Pinterest, Goodreads, Tumblr, and Twitter feature interface gestures of support and quality via sharing: repinning, retweeting, reblogging. Reddit, Stack Overflow and Hacker News feature interface gestures of competition: “up voting” and “down voting.”
It’s nearly impossible to separate content from platform (Pinterest with its bevy of food photography and Stack Overflow with its wealth of technical information appeal to different audiences) or to prove causation. Nonetheless a curious trend exists amongst this user data. Does a competitive interface alienate women?
If we look to the research on gender and competition, the answer is a resounding yes—and the findings are sobering. Muriel Niederle and Lise Vesterlund found that men are twice as likely to enter a tournament as women (2007). So before females even get into the room at a competition, men are more likely to outnumber them. Even the most talented women are reluctant—Niederle and Vesterlund found that women in the highest-performing quartile were less likely to enter than men in the lowest-performing quartile.
Scholarship in this area is nascent, and Niederle and Vesterlund’s paper is recent. But the research on whether competition is effective at all is well-documented. Alfie Kohn’s book No Contest points to a meta-analysis of 128 studies on competition versus cooperation from 1924 to 1980 (Johnson et al., 1981). Amongst the findings: “65 studies found that cooperation promotes higher achievement than competition, 8 found the reverse, and 36 found no statistically significant difference.”
Even setting aside the question of whether competition alienates women, it doesn’t produce better results for anyone.
If we look at the education landscape, it’s evident that “gamification” has taken hold. While it’s entirely possible to design cooperative games—I look to alternate reality games in particular—in the ed-tech world, competition has become commonplace. Points, leaderboards and winning are frequently used to spark engagement (how does one win at learning?).
But how do we assess cooperative learning and collaboration? How do we create interfaces that nurture networked learning?
I recommend the following design principles:
- Cooperation-based user interfaces—sharing, favoriting, trees of influence
- Smarter defaults—gender-neutral prompts, language and imagery, invitations to participate
- Project-based assessments—rubrics, portfolios, mechanisms that focus on qualitative feedback
The Scratch team at the MIT Media Lab holds collaboration as a core tenet for their platform. As young “Scratchers” create their own design projects, they are encouraged to share them with the community for feedback. Many of Scratch’s design choices nurture cooperation, among them “remix trees” which show how many Scratchers have used a project for inspiration.
These trees show how projects are interconnected. Students’ projects are never 100% “theirs”: they always possess the potential to be adapted and built upon, and they stand on the shoulders of something else. Making reusable projects is an important 21st-century skill, and remix trees “measure” that impact.
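The remix-tree idea can be sketched as a small data structure. This is a hypothetical illustration (not Scratch’s actual implementation): each project records which project it was remixed from, and a project’s “impact” is the number of remixes anywhere in the tree beneath it.

```python
# Hypothetical sketch of a "remix tree": each project knows what it was
# remixed from, and impact = number of projects descended from it.

class Project:
    def __init__(self, title, remixed_from=None):
        self.title = title
        self.remixes = []  # direct remixes of this project
        if remixed_from is not None:
            remixed_from.remixes.append(self)

    def impact(self):
        """Total number of projects anywhere in this project's remix tree."""
        return len(self.remixes) + sum(r.impact() for r in self.remixes)

original = Project("Maze Game")
remix_a = Project("Maze Game with Music", remixed_from=original)
remix_b = Project("3-D Maze", remixed_from=original)
remix_c = Project("3-D Maze with Levels", remixed_from=remix_b)

print(original.impact())  # 3 -- the whole tree "stands on" the original
```

The measure here is qualitative in spirit: it doesn’t rank learners against each other, it shows how far an idea traveled.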
In the Peer 2 Peer University community, Dave Cormier recently launched the “Rhizomatic Learning: The Community is the Curriculum” course. Cormier challenged learners to create their learning paths together by connecting to each other. In order to “measure” the connectedness of the community, he used Martin Hawksey’s TAGSExplorer tool to visualize their network at a high level:
From there, Cormier prompted learners to see singletons in the community network and reach out to them. Collaboration and cooperation was baked into the course structure. Learners could see the impact they were having on the community, in real time, as their knowledge network grew.
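The underlying check (who in the network is still unconnected) is simple to sketch. This is a hypothetical illustration, not Hawksey’s actual tool, using a plain dictionary of which peers each learner has engaged with:

```python
# Hypothetical sketch: find "singletons" -- learners nobody has
# connected with yet -- in a simple interaction graph.

interactions = {            # learner -> set of peers they've engaged with
    "amira": {"ben", "chen"},
    "ben":   {"amira"},
    "chen":  set(),
    "dara":  set(),         # dara hasn't reached out to anyone yet
}

def find_singletons(graph):
    """Learners with no outbound connections who were never mentioned."""
    mentioned = set().union(*graph.values()) if graph else set()
    return sorted(name for name, peers in graph.items()
                  if not peers and name not in mentioned)

print(find_singletons(interactions))  # ['dara'] -- chen was mentioned by amira
```

A facilitator (or the interface itself) could surface this list as an invitation to reach out, rather than as a score.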
But the interface itself is only one part of the equation—the other is crafting assignments to produce diverse ideas. Let’s take a look at the kinds of online assignments that suit this approach.
Assignments that Bind
Focus on themes. Instead of encouraging learners to compete to solve a problem, select a thematic focus for a project (Resnick & Rosenbaum, 2013). Organizations like StoryHack create thematic “hackathons” where folks come together around certain ideas or riff on a site for inspiration. Instead of working deductively towards a “correct” solution, learners are free to follow their personal passions. In a more formal setting, the Science Leadership Academy in Philadelphia anchors each year with a theme: identity, systems, change and creation. The assignments map onto those themes, prompting learners to create their own understanding of the concepts.
Highlight diverse examples. Showcasing a range of work inspires learners to imagine what’s possible (as opposed to what’s correct). At the same time, an array of different projects models inclusivity and builds a broad community of learners. For instance, the homepage for the 3-D printing community Shapeways features projects that range from an egg-cup to accessories to bling out your Google Glass.
Use remixing as an assignment. Starting with a piece of music, art, poetry or code and prompting learners to remix it offers an infinite number of interpretations of one original piece. For P2PU’s recent Play With Your Music project, hundreds of folks remixed their own version of “Air Traffic Control” by Clara Barry. We saw everything from trip hop to echoey reverb. At the same time, since the learners had spent so much time with the original piece, they were able to critique the subtle departures their various peers made.
Studio critique is a method we’re excited to see in the future of online learning. While calibrated peer review (review 5 other people, then review yourself) is an idea that’s been around for a while, only recently has collaborative video been so easy. Peers reviewing each other—in real time and in their own voices—imparts agency to each learner in the group. Google Helpouts is already edging in this direction, and I can’t wait to see more.
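The mechanics of calibrated peer review can be sketched with a simple rotation scheme, so every submission receives the same number of reviews. This is an illustrative assignment algorithm with made-up names, not any platform’s actual matching logic:

```python
# Hypothetical sketch of calibrated peer review: each learner reviews
# k peers (by rotating through the roster) and then reviews themselves.

def assign_reviews(learners, k=5):
    n = len(learners)
    k = min(k, n - 1)  # can't review more peers than exist
    assignments = {}
    for i, learner in enumerate(learners):
        # the next k learners in the (circular) roster
        peers = [learners[(i + offset) % n] for offset in range(1, k + 1)]
        assignments[learner] = peers + [learner]  # self-review comes last
    return assignments

roster = ["ana", "bo", "cy", "di", "ed", "fay"]
for reviewer, targets in assign_reviews(roster).items():
    print(reviewer, "reviews", targets)
```

Because the rotation is symmetric, every submission is reviewed by exactly k peers plus its author, with no leaderboard anywhere in sight.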
You’ll notice that these assignments are open-ended, subjective, creative and equalizing. The assessments (rubrics, badges, or reviews, for example) should be as well.
What’s At Stake
At this juncture, large-scale MOOCs and e-learning platforms boast “personalized learning experiences.” While self-study and empowerment are admirable goals, usually “personalized” means a student working alone on problem sets (even if those problem sets are specifically designed for them). The danger is that online learning will become a sea of student silos, all in search of the right answer.
But real-life problems often have dozens of potential solutions. As long as there is one right answer, we’re competing for knowledge as if it were scarce. Though it is tricky, we must build assignments and interfaces that recognize an array of projects. If we recognize many paths to the answer, we’re cultivating a diversity of ideas and people. Or, put another way, pluralism.
Banks, D. (2013, December 20). Very Serious Populists. The New Inquiry. Retrieved May 5, 2014, from http://thenewinquiry.com/essays/very-serious-populists/
Cheryan, S., Plaut, V. C., Davies, P. G., & Steele, C. M. (2009). Ambient belonging: How stereotypical cues impact gender participation in computer science. Journal of Personality and Social Psychology, 97(6), 1045. http://depts.washington.edu/sibl/Publications/Cheryan,%20Plaut,%20Davies,%20%26%20Steele%20(2009).pdf
Fine, C. (2010). Delusions of gender: How our minds, society, and neurosexism create difference. W. W. Norton & Company.
Inzlicht, M., & Ben-Zeev, T. (2000). A threatening intellectual environment: Why females are susceptible to experiencing problem-solving deficits in the presence of males. Psychological Science, 11(5), 365–371. http://gentryd.people.cofc.edu/womenmathfactorial.pdf
Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D., & Skon, L. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89(1), 47.
Kohn, A. (2013). No contest: The case against competition. Houghton Mifflin Harcourt.
Niederle, M., & Vesterlund, L. (2007). Do women shy away from competition? Do men compete too much? The Quarterly Journal of Economics, 122(3), 1067–1101. http://www.international.ucla.edu/cms/files/niederle.pdf
Resnick, M., & Rosenbaum, E. (2013). Designing for tinkerability. http://web.media.mit.edu/~mres/papers/designing-for-tinkerability.pdf
Steele, C. M., Spencer, S. J., & Aronson, J. (2002). Contending with group image: The psychology of stereotype and social identity threat. Advances in Experimental Social Psychology, 34, 379–440.
Turkle, S., & Papert, S. (1992). Epistemological pluralism. Journal of Mathematical Behavior, 11(1), 3–33. http://papert.org/articles/EpistemologicalPluralism.html
Thanks to Zac Chase, Beth Reddy, Dirk Uys, Aidan Feldman, Jonathan Gottfried, Dr. Alex Ruthmann, Rob Spectre, and Jessy Kate Schingler for their feedback and input on this piece. Image credits: Clemens Vogelsang.