Reflections from an EdTech Student with a CS Background
I am a fourth-year Computer Science PhD student minoring in Education, and I have taken two classes at Harvard’s Education school: T509 (Future of Learning at Scale) and T561 (Transforming Education Through Emerging Technologies). Both have been very informative about how practitioners of education currently use, or could use, technology. One thing in particular stood out to me as common to both classes.
The Level of Sophistication of a Technology is Independent from the Educational Goals it Achieves… or is it?
In these classes I have seen simple devices work surprisingly well, like designing bridges with pennies and paper. I have also seen surprisingly sophisticated, state-of-the-art technological solutions for helping students learn, like using computer vision algorithms to superimpose old photos onto a student’s smartphone camera feed during an augmented reality game, immersing them in how a place once looked. Yet what has surprised me is that regardless of a technology’s level of sophistication, the chief focus is on how it helps someone learn. As a software developer, I tend to bias my excitement (and fear!) toward more sophisticated technologies. I initially dismissed this as a bad thing, but with time I have come to realize there may be a deep reason for my visceral reaction.
I feel a good analogy is to see these educational technologies as icebergs: the surface of the water obscures the amount of sophistication required to make the technology possible, leaving visible only the ice above, the educational benefit it provides. As someone with a computer science background, I tend to marvel at the iceberg as a whole. I have noticed that others, for whatever reason, do not look below the surface and instead focus only on the benefit it provides.
I think this sort of view on educational technology has its advantages. Viewing these technologies through the lens of the educational goals they accomplish is extremely important and avoids distraction. Yet I also feel that such a narrow view might have some unexpected disadvantages. I am most concerned that not understanding the full extent of a technology, and as a result not being able to navigate the issues that come up in using it, may be extremely detrimental. To abuse my analogy: just as the RMS Titanic met its demise for lack of any means to detect a large iceberg hidden mostly under the surface, one should know what technological implications (read: dangers) lurk beneath the surface of a seemingly helpful product.
Take, for example, the very common situation where a school district chooses to buy computers for its students’ use. Adopting that technology incurs more than the initial cost of the computers: it requires support staff to service them and keep the internet connection running. This may seem obvious to many (as computers are somewhat ubiquitous), yet short-sightedness or ignorance can spell disaster for the adoption of a technology (read more about some issues districts face after adopting a school computer policy). One example of a poor technology-adoption choice comes from the somewhat recent selling off of iPads previously purchased by school districts. According to The Atlantic, some districts sold off their iPads in favor of Chromebooks because “it was far easier to manage almost 200 Chromebooks than the same number of iPads” (read more here). The hope is that a more informed purchaser would have opted for the Chromebooks in the first place.
Another personal example comes from my class project developing an augmented reality game for middle schoolers. Since the current platform does not support an offline mode, it vastly restricts where and how the game can be used (i.e., the devices must have access to the internet). This could be a deal-breaker for many potential users. Yet I can see this issue not surfacing until after adoption, since it would go unnoticed during demos: demos are usually held indoors, while this game happens to take place outside, where wireless internet is not as easily accessible.
I think these are good examples of how the level of sophistication can affect the educational goals a technology achieves through simple denial of service: more complex solutions tend to be more difficult to support and, as a result, may end up abandoned, achieving no educational goals at all. Too often, deployment of these technologies feels like a scary afterthought (“Well, we have to think very hard about how to deploy this,” or “we’ll need a support staff for this device”), but we rarely talk about how exactly we will provide that support or what it really entails. I would have loved to have conversations about how to make technologies easier to support. It would be great to see these issues brought more toward the forefront and made part of the design space. I worry, though, that many would see these types of issues (e.g., “what sorts of things are modifiable in MOOCs generally?”) as uninteresting details to be pushed aside in the interest of class time limitations.
Returning to my previous example of an offline mode for my augmented reality game: the solution is technologically straightforward and would require minimal investment to support, but would a district know that? Or would they steer clear of the product entirely without asking the question? On the flip side of understanding the potential issues that come with adopting a new technology, I worry that some educational practitioners see technological solutions as either black boxes or totally malleable; few know where a solution can feasibly be adjusted. On one end, if there is an issue with a product, it must be worked around or the product avoided entirely. On the other end, some are surprised and upset when a solution cannot be adjusted to fit a particular custom need. Both types of requests are frustrating when I have my engineering hat on, though totally understandable. To reuse my augmented reality example: if the platform engineers knew that supporting an offline mode was (hypothetically, for this example) the only thing stopping major adoption, they would be upset that no one told them and would immediately work to support that feature. Yet it is almost equally frustrating for a user who sees the product as a black box and makes what seems to be a minor request, only to be shut down by an engineer claiming it is too difficult to implement.
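To give a sense of why an offline mode can be technologically straightforward, here is a minimal sketch of a cache-first asset loader: assets already seen while online are served from local storage, so the game degrades gracefully without a connection. Everything here (the class name, the cache directory, the URL) is hypothetical and illustrative; it is not the actual game platform’s code.

```python
import hashlib
import os
import urllib.request


class AssetCache:
    """Cache-first loader: serve a locally stored copy when available,
    and only hit the network (then cache the result) on a miss."""

    def __init__(self, cache_dir="asset_cache"):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, url):
        # Hash the URL to get a safe, unique filename.
        return os.path.join(self.cache_dir,
                            hashlib.sha256(url.encode()).hexdigest())

    def get(self, url):
        path = self._path(url)
        if os.path.exists(path):
            # Offline-friendly branch: no network needed.
            with open(path, "rb") as f:
                return f.read()
        # Online branch: fetch once, then persist for later offline use.
        data = urllib.request.urlopen(url).read()
        with open(path, "wb") as f:
            f.write(data)
        return data
```

The point is not this exact design, but that the pattern (check local storage before the network) is a small, well-understood change, not a research problem; a district evaluating the product would have no way of knowing that without asking.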
My hope is for more understanding on both ends of how a technology really works and how it actually helps someone learn. That way, the developers of those technologies know intuitively what is important to focus on, and practitioners can manage their expectations accordingly.
Concretely, I think it would have been nice for students to understand, at a high level, how software (or technology more generally) is developed. For example, features do not appear overnight, and the development cycle is quite iterative. What does it mean to hire an engineering firm to develop a product? What are the pitfalls of hiring a full-time in-house engineer, and what can they realistically help with? It would also help to get a sense of scale between different technological solutions (blogs can be deployed relatively easily, but what about a custom MOOC? Do I need my own web server? What is a web server?). An ontology of educational technologies, organized by the technical properties relevant to a practitioner, policy maker, or school district, is really missing and would be extremely beneficial to the educational technology industry: it would provide a common language connecting educational objectives to their technical implications.
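As a rough illustration of what one entry in such an ontology might record, each technology could be profiled along a few cross-cutting technical axes. The field names and example values below are my own assumptions, not an established taxonomy:

```python
from dataclasses import dataclass


@dataclass
class EdTechProfile:
    """One hypothetical entry in an ontology of educational technologies."""
    name: str
    requires_internet: bool    # does it stop working offline?
    hosting: str               # e.g. "vendor-hosted" or "self-hosted"
    needs_support_staff: bool  # recurring cost beyond the purchase price
    customizable: bool         # can features realistically be adjusted?


# Two illustrative entries -- the judgments here are examples, not claims.
chromebook_fleet = EdTechProfile(
    name="Classroom Chromebooks", requires_internet=True,
    hosting="vendor-hosted", needs_support_staff=True, customizable=False)

class_blog = EdTechProfile(
    name="Class blog", requires_internet=True,
    hosting="vendor-hosted", needs_support_staff=False, customizable=True)
```

Even a lightweight shared schema like this would let a district ask the right questions (“does it need support staff?”, “can the vendor adjust it?”) before purchase rather than after.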