
Tuesday, November 7, 2017

Crowdsourcing and Evaluation: Addressing Issues of Security


I look forward to talking with AECT colleagues on Thursday, 11/9/17, about accommodating crowdsourced data in social research and program evaluation. This conversation complements my future research agenda on using specialized crowds for e-learning evaluation.

Steve King aka Dr. Security
For questions about managing data and cybersecurity, I typically turn to the expertise of Steve King, Netswitch COO/CTO. Learn more about Steve below and read what I learned from him about security and engaging crowds in e-learning evaluation. Steve is happy to advise evaluators on architectural requirements and cloud data security. Reach out to him on Twitter and LinkedIn @sking1145.

Platform Considerations for Human Intelligence Tasks
Crowdsourced evaluation activities should run on a secured web-conferencing platform or a MOOC kept separate from the school's network. The platform must enforce restrictions that leave no easy avenue for compromising crowd evaluation tasks.
Assuming the crowd will be operating on an external host (a cloud provider such as AWS or Azure), the platform ought to offer several services. Cloud providers typically offer content monitoring services so that uploads and downloads can be deleted at the initiator's discretion. They also offer metadata backup of the event itself, so key information can be retained without leaving the crowd initiator or evaluator vulnerable to the actual content being compromised.
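To make that last point concrete, here is a minimal sketch, in Python, of the kind of metadata-only record an initiator might keep after the uploaded content itself has been destroyed. The field names and helper function are illustrative assumptions, not any provider's actual API.

```python
# Minimal sketch (not any provider's actual API): retain event metadata
# while discarding the uploaded content itself. All field names are
# illustrative assumptions.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EventMetadata:
    member_id: str        # credentialed crowd member who uploaded the file
    file_name: str        # original name, kept for the audit trail
    content_sha256: str   # fingerprint of the content, not the content itself
    uploaded_at: str      # ISO timestamp of the upload

def record_and_discard(member_id: str, file_name: str, content: bytes) -> EventMetadata:
    """Keep a verifiable record of the upload, then let the content be destroyed."""
    digest = hashlib.sha256(content).hexdigest()
    return EventMetadata(
        member_id=member_id,
        file_name=file_name,
        content_sha256=digest,
        uploaded_at=datetime.now(timezone.utc).isoformat(),
    )
```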

Data Processing Considerations
The cloud service provider should be able to process, and secure, large volumes of high-velocity structured (spreadsheets, databases) and unstructured (videos, images, audio) information. Even if an evaluator chooses to run evaluation tasks in a local private cloud, most established providers offer this level of security.
But it is CRUCIAL to separate the cloud from the school's network so that crowd members cannot gain access to the school's information or administrative network. If an evaluator can only run off the school's own system or network, she will be forced to set up at least a subnet and buy some hardware, which could cost more than running the whole thing in a cloud. I would NOT advise an evaluator to try the latter option.

Credentials Considerations
In order to have a clearly defined crowd, crowd initiators must assign every member a credential and use whatever vetting process the evaluator would normally use to determine their appropriateness for participation. An SSN or EIN matched with key locator information, or a student ID that can be validated, could work.
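Here is a minimal sketch of issuing and re-validating member credentials while storing only a salted hash of the vetting identifier. It assumes a simple in-memory registry and is illustrative only, not a production identity system.

```python
# Minimal sketch (illustrative assumptions, not a production vetting system):
# issue each vetted crowd member a credential, storing only a salted hash of
# the identifier (SSN, EIN, or student ID) used to validate them.
import hashlib, hmac, secrets

CREDENTIALS = {}  # credential_id -> (salt, salted hash of the vetting identifier)

def issue_credential(vetting_identifier: str) -> str:
    """Return a credential ID for a member who passed the evaluator's vetting."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + vetting_identifier).encode()).hexdigest()
    credential_id = secrets.token_urlsafe(16)
    CREDENTIALS[credential_id] = (salt, digest)
    return credential_id

def validate_credential(credential_id: str, vetting_identifier: str) -> bool:
    """Re-check a member's identifier against the stored hash, e.g., at login."""
    if credential_id not in CREDENTIALS:
        return False
    salt, digest = CREDENTIALS[credential_id]
    candidate = hashlib.sha256((salt + vetting_identifier).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)
```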

Other Key Platform, Data, and Credential Considerations:
The Federal Risk and Authorization Management Program (FedRAMP), while specifically related to government security requirements, provides compliance guidelines that help ensure a secure web conferencing platform.
FedRAMP standards are built on the baseline security controls set out by the National Institute of Standards and Technology (NIST).
A web conferencing platform should be FedRAMP compliant; if it is not, it should employ a layered security model with the following characteristics:
·         Gated access is one important characteristic.
Gated access refers to the security options that manage entrance to and use of the virtual rooms employed by a web-conferencing platform. Gated access also helps prevent distributed denial-of-service (DDoS) attacks.
·         Platform, data, and credential restriction settings are important.
The ability to set restrictions on the hours a virtual room can be accessed minimizes the time in which sensitive and vulnerable information can be viewed and compromised. Crowdsourced HIT initiators will want to be able to monitor remote users. The platform should allow the encryption of all information in transit. It should also allow session locks and the ability to terminate credentials so the crowdsource initiator can manage platform access. Session locks allow evaluators to control who can enter a room and when. Credential termination should be both manual and automatic: once a member leaves a room or meeting space, their credentials should no longer work for re-entry without a new login.
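A minimal sketch of three of these restriction settings (access hours, a session lock, and automatic credential termination on exit) might look like the following. It is illustrative only and does not reflect any particular platform's API.

```python
# Minimal sketch (illustrative only) of three restriction settings:
# room access hours, a session lock, and automatic credential termination
# when a member leaves the room.
from datetime import datetime

class VirtualRoom:
    def __init__(self, open_hour: int, close_hour: int):
        self.open_hour, self.close_hour = open_hour, close_hour
        self.locked = False
        self.active_credentials = set()

    def within_hours(self, now: datetime) -> bool:
        """Room access is limited to the configured hours."""
        return self.open_hour <= now.hour < self.close_hour

    def admit(self, credential_id: str, now: datetime) -> bool:
        """Admit a member only if the room is open and not locked."""
        if self.locked or not self.within_hours(now):
            return False
        self.active_credentials.add(credential_id)
        return True

    def leave(self, credential_id: str) -> None:
        """Leaving terminates the credential; re-entry requires a new login."""
        self.active_credentials.discard(credential_id)

# Example: a room open 09:00-17:00, locked by the initiator mid-session.
room = VirtualRoom(open_hour=9, close_hour=17)
room.admit("cred-123", datetime(2017, 11, 9, 10, 30))
room.locked = True       # session lock: no further entries
room.leave("cred-123")   # automatic credential termination on exit
```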
The platform should also let evaluators encrypt event recordings, both at rest and in transit, so that when a crowd initiator shares the contents with evaluation members who could not participate synchronously, only authorized members hold the encryption keys that grant access.
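For example, encrypting a recording at rest could be as simple as the sketch below, which uses the third-party Python cryptography package. The recording bytes are a placeholder, and getting the key to authorized members (for example, through a key-management service) is assumed to happen separately.

```python
# Minimal sketch of encrypting an event recording at rest using the
# third-party `cryptography` package (pip install cryptography). The
# recording bytes are a placeholder; key distribution to authorized
# members is assumed to happen out of band.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # shared only with authorized members
cipher = Fernet(key)

recording = b"raw bytes of the event recording"    # placeholder content
ciphertext = cipher.encrypt(recording)             # store or share this form

# Only a member holding the key can recover the recording.
assert cipher.decrypt(ciphertext) == recording
```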
Initiators will want to be able to define and control roles and access privileges for the crowd, which establish the specific conditions under which a member can interact with a group or room. Role-based access control and dynamic privilege management are the keys to this next layer. The evaluator retains ultimate control over who gets to enter which rooms. For example, she can decide that a member who needs to share information with a key group, but who should not be allowed direct access to that group, is assigned to a sub-conference room where a primary conference member can meet them, gather the information, and return to the primary meeting.
Dynamic privilege management allows an evaluator to retain a member's virtual identity while suspending their access privileges, so a member could have their privileges upgraded temporarily for a one-time event and then returned to their prior status. This could also support evaluation requirements for working on individual or group e-learning tasks and for protecting small human intelligence tasks, or HITs.
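The following is a minimal sketch of how role-based access control with temporary privilege elevation might be modeled. The role names and room names are assumptions for illustration, not a real platform's configuration.

```python
# Minimal sketch (illustrative, not a platform API) of role-based access
# control with dynamic privilege management: roles map to rooms, and a
# member's privileges can be elevated temporarily without losing identity.
ROLE_ROOMS = {
    "observer": {"sub_conference"},
    "evaluator": {"sub_conference", "primary_conference"},
}

class Member:
    def __init__(self, name: str, role: str):
        self.name = name
        self.role = role
        self.suspended = False
        self.temporary_role = None   # one-time elevation, identity retained

    def can_enter(self, room: str) -> bool:
        if self.suspended:
            return False
        role = self.temporary_role or self.role
        return room in ROLE_ROOMS.get(role, set())

# A member who should not join the primary group directly stays an observer;
# for a one-time event the evaluator elevates, then restores, their role.
guest = Member("crowd_member_17", "observer")
assert not guest.can_enter("primary_conference")
guest.temporary_role = "evaluator"      # temporary upgrade
assert guest.can_enter("primary_conference")
guest.temporary_role = None             # returned to prior status
```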

The conferencing platform should also provide a way to pair a person with unique authenticators that can customize their privileges. This is done through individual access codes, which are essentially the member's fingerprints. Depending on the privileges granted, individual access controls keep track of access rules and determine which sessions each member is allowed to enter. This comes in handy for identifying suspects following an information leak or the inappropriate sharing of sensitive material.
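As a final sketch, again only as an illustration under assumed names, pairing members with unique access codes and logging each session entry is what would let an initiator trace a leak back to a member.

```python
# Minimal sketch (illustrative assumptions) of pairing members with unique
# access codes and keeping an audit trail of which sessions each code entered,
# which supports tracing a leak back to a member.
import secrets
from datetime import datetime, timezone

ACCESS_CODES = {}   # access_code -> member_id
AUDIT_LOG = []      # (timestamp, member_id, session_id, allowed)
SESSION_RULES = {"design_review": {"member_a", "member_b"}}  # allowed members

def assign_access_code(member_id: str) -> str:
    """Give the member a unique authenticator, their 'fingerprint'."""
    code = secrets.token_urlsafe(12)
    ACCESS_CODES[code] = member_id
    return code

def enter_session(access_code: str, session_id: str) -> bool:
    """Check the access rules and record the attempt in the audit log."""
    member_id = ACCESS_CODES.get(access_code)
    allowed = member_id in SESSION_RULES.get(session_id, set())
    AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(), member_id, session_id, allowed))
    return allowed

# After a leak, the audit log shows which members' codes entered the session.
code = assign_access_code("member_a")
enter_session(code, "design_review")
suspects = [m for (_, m, s, ok) in AUDIT_LOG if s == "design_review" and ok]
```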

Monday, September 11, 2017

Is there a Viable Role for Crowdsourced Data in Evaluation Studies? Steve’s Thoughts

Dr. Steve Ehrmann
I’m starting to engage researchers in a conversation about crowdsourced data. More formally, I started with the publication of a chapter in Revolutionizing Modern Education through Meaningful E-learning (Khan, B., ed., 2016). I will continue in Jacksonville, at a Research & Theory AECT round table. Over the next four months, I’ll also post the thoughts of various instructional design and distance learning researchers, leaders, and practitioners on the potential role of crowdsourced data. We’ll look at e-learning, innovation, and technology program evaluations in particular. Some may have relevant experience and feedback to share. I’m looking forward to the exchanges.

For our first installment, we’ll hear from Dr. Steve Ehrmann. Steve and I met when he was at George Washington University, and we later exchanged ideas around a couple of redesign initiatives that began during his time at the Kirwan Center for Academic Innovation at the University System of Maryland and continue today.
Steve shared thoughts below about how we might approach crowdsourcing and innovation. First take a look at his unpublished paper attached here and his description of the uniform impact and unique uses perspectives (pp. 9-11).  
The uniform impact perspective focuses on the same outcome for each user of the program (how well did they learn X, for example).  In contrast, the unique uses perspective assumes that each user may interpret the program differently, use it differently, and experience different results; this perspective assumes that results differ qualitatively from one user to the next. 
Crowdsourcing would be especially appropriate for a unique uses study of an innovation.  An evaluator or researcher might take the following steps:
1. Identify a crowd of users and figure out how to reach them online. 
2. Explain to them why it's in their interest to contribute their time to your inquiry (i.e., to respond thoughtfully to your message). Intrinsically motivating your crowd produces more valid feedback than extrinsic rewards (e.g., entering them in a lottery if they contribute).
3. Ask each user to consider what's been most valuable for them about their use of the program. What's been most burdensome, frustrating, or limiting about their use? Explain that you need the crowd to produce responses that are (a) each quite important to the person involved and (b) qualitatively different from the benefits and problems suggested by others. (In 1990, if you were studying uses of word processing with personal computers, the first answers might have to do with the benefits of multiple fonts and the ease of editing. But, after enough brainstorming, eventually someone might mention that their whole approach to rethinking has changed because rewriting is so easy.)

4.  Starting with these two long lists, begin a second round of inquiry about each item. Are there patterns? Are there connections? Do they suggest new ways of understanding the program itself?

The closest I’ve seen to this approach occurred in a face-to-face discussion of perhaps 20 first generation users of an educational innovation: the use of chat rooms by students in f2f composition courses (as I recall, this was in the early 1990s). The classes met in computer labs. Instead of talking, faculty and students would type to one another.  A couple of months into the first term, faculty from perhaps ten institutions met to discuss what was happening in their courses. The first 45 minutes focused on technical issues and on what the faculty liked.  Then one faculty member, ashamed, admitted that students in his course had erupted into a barrage of profanity and obscenity.  A long pause. Then two or three others said something similar had happened to them, too.  Cutting to the end of the discussion, one faculty member remarked, “Think about the French revolution. Think about what happens when powerless people get power. Some windows get broken. But they’re investing energy into writing. That’s what you need most in a writing class. So the trick is not to crush the revolution but to figure out how to channel the energy!”  Later conversations revealed two additional, different ways to interpret this innovation, each with different insights about how to make more intentional, effective use of such chat rooms. 
Michael Scriven calls this a goal-free evaluation. What I’d emphasize is creating a process through which users can provide quite different pictures of what the program is for, how it can be used, what benefits can be created and at what cost.   Those perspectives may be incompatible with each other and at least some may come as a surprise to the people who created the program.


Thank you, Steve, for starting off our new discussion!

Sunday, September 3, 2017

Fall Series: Evaluation and the Wisdom of the Crowd

New opportunities exist for including stakeholders' and others' input in evaluation research. Can you imagine the improvements we might front-load in e-learning and instructional designs if we were able to incorporate qualitative data seamlessly into the decision-making process? I'd like to explore those possibilities this fall.

Crowdsourcing involves gathering and processing large volumes of high-velocity, structured (spreadsheets, databases) and unstructured (videos, images, audio) information. Qualitative research also involves large data sets, and we can now process them quickly in ways we could not in the past. For example, some helpful applications for qualitative research and e-learning program evaluation include “big data” techniques for information retrieval (IR), audio analytics, and video analytics (Gandomi & Haider, 2014). The techniques to acquire, clean, aggregate, represent, and analyze data are many, and they help justify re-conceptualizing our evaluation paradigms and models, especially our interpretivist and postmodernist ones. Should we use crowdsourced data? If so, under what conditions, given some of the challenges of internet security and crowds? I've done some research and thinking about this over the last year. Let's talk.

Defining and Framing

Estellés-Arolas and González-Ladrón-de-Guevara’s 2012 literature review on an integrated definition of crowdsourcing included 10 definitions with a problem-solving purpose. Their proposed definition is as follows:

Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or a company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. This definition should be useful to our discussion, since the use of evaluation data in this context facilitates problem solving.


Crowdsourcing can change the way evaluators work, giving them access to just-in-time assistance in performing evaluation tasks. Researchers and practitioners will need more information to help determine if existing paradigms, approaches, and methods should be reassessed to accommodate crowdsourcing. Evaluators should also think carefully about what tasks might be appropriate for various program evaluation approaches, given some of the problems with crowdsourcing.

Additional questions to frame this blog feature: Does e-learning program evaluation need to develop its own definition of crowdsourcing, or merely validate Estellés-Arolas and González-Ladrón-de-Guevara’s (2012) definition? Do members of the American Evaluation Association task force want to consider conducting a needs assessment on whether crowdsourced input will affect its guiding principles for evaluators (American Evaluation Association, 1995)?

Sources cited:
Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200.

Gandomi, A., & Haider, M. (2014). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 137-144.

Tuesday, August 29, 2017

Evaluating Innovation and the Wisdom of the Crowd

I've had the privilege of collecting thoughts from various people on this blog about how we might approach evaluating innovation. Many regard e-learning as disruptive, with the potential to innovate teaching and learning. This year, I will start a new series on the potential role of crowdsourced data in evaluating innovation, instructional design, e-learning, and technology programs. As before, I'm looking for candidates to interview over the next five to six months. Please send me nominations.

In the meantime, I'm looking forward to discussions with NATO E-Learning (August) and AECT conference (November) goers. I posit that there may be opportunities for crowd data to inform our instructional designs. The wisdom of a defined crowd can be beneficial during instructional design and redesign processes. For an organization such as NATO, which has tremendous human capital, the crowd can help its members solve their unique design problems and make decisions about the unknown or unfamiliar in ways essential to their goals of promoting stability, security, and prosperity. I also maintain that, given the complexity of developing programs, services, policies, and support for e-learning, leaders may find it challenging to evaluate programs regularly to improve quality. It's worth a conversation in a world of lifelong learners and MOOCs.


Do you agree? Let's talk at AECT in Jacksonville. Others, let's talk here. Read more about my thoughts on the topic in my latest book chapter, Massive Open Program Evaluation: Crowdsourcing’s Potential to Improve E-Learning Quality, in the book pictured below.

Friday, May 22, 2015

Program Evaluation Resources




I haven't fallen off the face of the evaluation map or of social media. My activity level needed to decrease. Not only are we finalizing the details of our move from PA to VA, but I've also been working in the building you see in this picture. What is she doing? I'm writing a book chapter: Massive Open Evaluation: The Potential Role of Crowd Sourced Input to Improve E-learning.

During this journey, I've come across several valuable resources. Two were passed on to me by Tom Reeves, PhD, Professor Emeritus of Learning, Design, and Technology at UGA:


Here's Anita Baker's Evaluation Services website. It has excellent resources and tools.

Have a great summer!