2015-2016 Sherman Graduate Fellowship

The Lewis & Ruth Sherman Centre for Digital Scholarship is soliciting applications for a 2015-2016 graduate fellowship in digital scholarship. As digital scholarship (DS) becomes more prominent in academic research, the Sherman Centre’s role is to support members of the McMaster community as they experiment with DS tools and practices and integrate them into both projects and teaching. The term “digital scholarship” encompasses a diverse set of practices across the humanities, social sciences, and hard sciences; to learn more about our version of digital scholarship, see the statement on our site.

The ideal candidate for the fellowship will be a graduate student who seeks to help drive the evolution of their discipline by applying the tools and methodologies of digital scholarship to their research. Also critical is a willingness to learn about the collaborative and multidisciplinary nature of digital scholarship and to engage others, both within and beyond their discipline, in their projects. To create a focused learning experience, we ask applicants to propose a specific digital scholarship project that they will work on over the course of the year. This project could be part of a sandwich thesis or otherwise complement the student’s dissertation research.

To see current fellowship projects as well as other work happening around the Sherman Centre, please visit our project page. The 2014 fellows were Mark Belan, Chris Handy, and Jeremy Parsons.

The Sherman Centre for Digital Scholarship received designation as a McMaster University Research Centre in 2012. It is located on the first floor of Mills Library and includes a presentation space, three public high-powered workstations that provide access to a wide range of digital scholarship tools, and office space for researchers. Beyond space, the Sherman Centre offers a range of staff to support research projects, as well as a robust yet flexible technical infrastructure that fellows and other researchers may utilize.

The fellowship runs from September 1, 2015 until August 31, 2016 and offers these benefits:

  • office space in the Lewis & Ruth Sherman Centre for Digital Scholarship
  • technical and project consulting, both from its own staff and from other Library units, e.g. the Lyons New Media Centre or Maps, Data, GIS
  • technical infrastructure
  • a $1,500 stipend

Expectations and deliverables:

  • a presentation for the monthly Sherman Centre Colloquium (20-30 minutes)
  • posts on the Sherman Centre blog (minimum two per term) on project updates and/or related digital scholarship issues. These posts will be revised into a written report at the conclusion of the fellowship that details project outcomes, making specific reference to the role(s) played by the Sherman Centre
  • creation of a visualization of an aspect of their work to display on the Sherman Centre multimedia entryway
  • physical presence in the centre and participation in its activities


Eligibility:

  • current or accepted graduate student at McMaster University (open to all faculties)
  • not previously a Sherman fellow

Applicants should submit a letter of intent outlining their project and how it would benefit from the fellowship, along with a CV and a list of three references to Dale Askey, Administrative Director of the Sherman Centre (askeyd@mcmaster.ca).

We will be holding two information sessions for interested graduate students:

  • Thursday, April 30, 1:30-2:00 (following the Sherman Colloquium from 12:30-1:30)
  • Wednesday, May 13, 3:00-3:30

Deadline: Friday, May 22nd

Posted in Blog

Interview with Micki Kaufman

As larger archives of human cultural output are accumulated, historians face a deluge of information. Where scarcity of information was once a common frustration, historians now face the opposite problem. Amidst veritable haystacks, historians must locate the needles and, presumably, use them to stitch together a valid historical interpretation. To manage this information overload, historians are beginning to employ digital techniques. Indeed, a wide range of computational tools and methods now enable historians to conduct research at a scale once thought impossible. For example, Micki Kaufman, a doctoral candidate in US History at the Graduate Center of the City University of New York (CUNY), employs computational text analysis techniques to study the Digital National Security Archive’s Kissinger Collection. The collection comprises approximately 17,500 meeting memoranda (memcons) and telephone conversation transcripts (telcons) detailing Henry Kissinger’s correspondence between 1969 and 1977.

The Kissinger Collection, as a large-scale online resource, presents both an opportunity and a challenge for historians. Having this large volume of information available online is undoubtedly valuable, but the restrictions of the web-based search interface render it of limited use for historians. The application of more sophisticated computational techniques, then, permits a comprehensive analysis of the Kissinger Collection and facilitates meaningful historical interpretation. In order to understand the benefits and pitfalls of Digital Humanities, I recently interviewed Kaufman about her research, her advice for new digital humanists, and her views on the future of the field.

To read more about Kaufman’s research, please visit her blog: “‘Everything on Paper Will Be Used Against Me’: Quantifying Kissinger.”


SW: Increasingly, archives are making (portions of) their collections available online. This enables scholars to use digital techniques to ask new questions of large-scale historical data, like the Kissinger Collection. Your research emphasizes text analysis – word frequency and collocation, topic modeling, influence mapping, and sentiment analysis – and visualization – force directed graphs, line and bar graphs, and area and stream graphs. Why did you decide to utilize digital techniques for analyzing the Kissinger Collection, and how did you develop your methodology?

MK: Most fundamentally, the problem I encountered in the analysis was one of scale. The amount of material generated during Henry Kissinger’s White House years (1969-1977) is vast. This posed lots of problems for this project – even confining my particular research to his official correspondence materials curated by the National Security Archive (NSArchive) at George Washington University, I had a dizzying 18,600+ documents to analyze, involving thousands of individuals, organizations, and subjects in the discussions. At the same time, the lack of available material was also a problem – the material at the National Security Archive is a declassified subset of the total amount of correspondence generated and collected. So, in light of the problems of scale, I chose to confront the technological challenges of distant reading, rather than the more typical problem of how (and where) to choose a subjective starting point for a conventional close reading.

In addition to scale, any research based upon Henry Kissinger’s publicly available correspondence is complicated by a dense and contentious historiography generated by a host of historians in the past 40 years. Kissinger and his geopolitical impact are hotly debated, not merely for the controversies of his tenure but for the controversies of his character. He is a man whose history is defined by a paradoxical blend of policy, personality, celebrity, and secrecy. To study a man of such complexity and impact on the basis of selected, cherry-picked items of evidence is therefore to run the same risk former Nixon Domestic Policy Advisor John Ehrlichman warned about, that a few snippets of ‘tape’ would only create an oversimplified, and therefore wrong, impression. Only by studying ‘all’ the tapes, and ‘all the archives’ could one form a picture that would properly reflect Kissinger’s deeply complex, and internally contradictory qualities.

The methodology chosen for the project reflects the nature of the primary sources under analysis. In this case, to represent such a vast and complex archive, and to do so from the ‘top-down,’ required an approach along the lines of what Jo Guldi and David Armitage described in “The History Manifesto.” A study of the archive en masse recommended a computational analysis of the text, data and metadata that the archive comprises. The results of such data sets were best understood with data visualization – and the choices for how those visualizations would be designed and deployed were intimately based upon the kinds of patterns and questions they evoked and revealed.

SW: For scholars just starting out in the Digital Humanities, the field can seem somewhat daunting. What skills would you recommend as a starting point to help navigate the growing overlap between the humanities and technology? What skills did you find most useful when you were beginning your own research?

MK: More than anything else, it requires that the scholar view the computer as they would anything else in their research environment – subject to inquiry and modification. My view of Digital Humanities is that it reflects an inbuilt ethic of willing (even brave) deconstruction and reconstruction, recreation and reinterpretation – and this includes the Digital as much as the Humanistic. As with any other aspect of the research process, the tools and methods need to be interrogated, analyzed, criticized, modified and/or replaced in order to understand whether the scholarly interpretation of the results they generate and demonstrate is of any lasting value. So, learn what the computer is, how it works. What is an operating system? What is the difference between Mac, PC and others? Why are they different? How are they the same? What is a file? What is the difference between a text file and an Excel file, for example? How are they the same? These may seem like basic questions, but they bear immense relevance to being able to solve scholarly questions without jettisoning the scholarly mindset.

Once you feel pretty good about the computer and how information is saved, stored, processed, and displayed in the course of your everyday use, you can then begin to ask about how the computer is deployed in the scholarship. Which digital scholars use what software or operating system? What files and what tools are they using? Why? It may seem very basic, but the most useful skill I found, more than anything else at this point, was how to connect various tools and technologies using these basic common denominators (file types, etc.) – since there was (and there remains) no single ‘One Size Fits All’ tool or platform for digital scholarship. Such a basic understanding of these components helps one to transform or repurpose the information from one process or tool into the form compatible with whatever tool is necessary to take the research process to the next step.

SW: As those working in the Digital Humanities likely know, it is not uncommon to encounter roadblocks during the course of a project. Some roadblocks require only a minor detour, while others necessitate an entirely new route. Have you ever encountered such a roadblock? How did you deal with it, and what advice can you offer to others going through the same experience?

MK: My research is all about roadblocks. I would consider a large part of the work of the digital historian to be the creation of ‘road(un)blocks.’ The good ones are the ones you can share and generalize into best practices that can be made available to others. The advice I would have to those confronting such seemingly irreconcilable obstacles is to consider them opportunities for learning wrapped in frustration. In those moments, step away from the tech and sketch what you see in your head (literally, or otherwise). Focus on the bits and pieces and weave them together in your mind. In this kind of exercise, one can often find that a core aspect of what the ‘right brain’ serves up contains the germ of a new approach. At the very least, document and share the roadblock.

I have found that about 75% of the limitations I have encountered in Digital Humanities were my own limitations, either conceptually or practically, and thus represented opportunities to grow my skills and abilities. Another 20% represented obstacles posed by absent or unavailable technology – machines and code that either didn’t exist yet or didn’t exist in the form I needed it to (some of which also was within my power to create or learn). The last 5% or so was stuff I had to document and abandon, and that is often some of the most provocative stuff (stuff that I either couldn’t afford, couldn’t approximate, or wasn’t out yet). Most of all, don’t ever let obstacles like these dilute one’s passion for the ideas or the work – for when you are confronted by these obstacles, you may well be at the threshold of great things. And as an historian, it is a great honor and privilege to struggle with such questions in the effort to expand knowledge.

SW: Overall, how has your work with text analysis and visualization shaped your dissertation? The field of Digital Humanities is growing rapidly; however, there are some who question the validity of the field. How can grad students incorporate digital techniques into their dissertations, theses, and major research projects? Do you envision the future of Digital Humanities as solely a set of tools for supporting traditional research (e.g. close-reading), or do you think it can stand as a final product on its own?

MK: The methodological choices I have made throughout have had a huge impact on the course of the research for a number of reasons. First of all, given the complexities posed by the denseness and scale of the material and its selective declassification, the tools were arguably one of the only ways in which the work could have been approached at all comprehensively on the timetable of a doctoral dissertation. Second, the use of these techniques is an existential necessity for the dissertation to avoid ‘cherry picking’ and facilitate new interpretations and demonstrations of some of the unique benefits of Digital Humanities methods. Last, the use of visualization, in particular, has affected how I construct my arguments – while text contains narrative arguments articulated linearly, visualizations are non-linear, revealing patterns and trends that narrative argumentation can sometimes struggle to adequately articulate.

Digital Humanities is far beyond tools, in my opinion. Historical interpretation using digital tools is a different process than traditional close reading, but today’s Google-enabled traditional close reading is a different process than the archival practice of days past. Someday soon, Digital Humanities will once again be Humanities. Our use of text analysis, network and data visualization, geospatial mapping and other approaches will become an assumed and comfortable part of the research landscape. As always, scholars who brave the journey must be ready with a pioneering spirit to overcome the obstacles that come with the territory, digital or otherwise. If scholars can sustain and nourish their historical empathy while overcoming the challenges of a constantly evolving technological landscape, Digital Humanities (and digital history) will continue to thrive.


Micki Kaufman is a doctoral candidate in US History at the Graduate Center of the City University of New York (CUNY). Her dissertation, “‘Everything on Paper Will Be Used Against Me’: Quantifying Kissinger,” is a seven-time winner of the CUNY Graduate Center’s Provost’s Digital Innovation Grant. She is a co-author of “General, I Have Fought Just As Many Nuclear Wars As You Have,” published in the December 2012 American Historical Review. In 2015, Kaufman was awarded the ACH and ADHO’s Lisa Lena and Paul Fortier Prizes for best Digital Humanities paper worldwide by an emerging scholar. From 2015-2017 she served as a Virtual Fellow with the Office of the Historian at the US State Department, is currently a Biography Dissertation Fellow at the Leon Levy Center, and serves as an elected member of the Executive Council of the Association for Computers in the Humanities (ACH).

Posted in Blog

W.F. Albright Institute of Archaeological Research

2017-2018 Sherman Graduate Resident Michael Brooks Johnson, a doctoral candidate in the Department of Religious Studies at McMaster University and a George A. Barton Fellow at the W.F. Albright Institute of Archaeological Research, will be hosting a workshop on Thursday, January 25th at 4:00 pm at the Albright Institute. Those unable to attend who would like to learn more about Michael’s innovative research can view the workshop live here: https://www.youtube.com/watch?v=rW_ofhJ-1Hk

A colorized visualization of the rolled reconstruction of 1QM, courtesy of Michael Brooks Johnson

Wrapping Up 1QM: A Scrolling Three-Dimensional Model of the War Scroll

This workshop presents a scrollable digital reconstruction of 1QM, the War Scroll from Cave 1 at Qumran. The majority of the Dead Sea Scrolls require some degree of material reconstruction—the arrangement of the remains of a manuscript in what are proposed to be their original places. One of the techniques for reconstructing scrolls, the Stegemann Method, named after Hartmut Stegemann, involves reconstructing a manuscript with reference to the patterns of damage that were incurred by a scroll while it was rolled. The principle of repeating patterns of damage can be used to calculate the distance between fragments. One of the challenges in this kind of reconstruction is that these patterns of damage are difficult to visualize in a conventional edition. Johnson will explain how digital modeling can supplement arguments for the material reconstruction of scrolls by demonstrating how patterns of damage align when the reconstruction is rolled. Digital models can also be used to evaluate the feasibility of competing proposals for the placement of fragments. As an example, Johnson will discuss how the model of 1QM sheds light on the manuscript’s last major material debate, the placement of the fragments that constitute col. 19. This column contains a recension of a hymn found in col. 12, which has led Hanan and Esther Eshel to suggest that it may belong to a different manuscript. This workshop offers the model of 1QM as a proof of concept for the use of scrolling digital models both in the process of reconstructing Dead Sea Scrolls and in visualizing arguments for the placement of fragments.
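The geometry behind the Stegemann principle can be sketched in a few lines of code. The following is an illustrative model only, not Johnson’s actual reconstruction, and the measurements are invented: a single point of damage pierces every turn of a rolled scroll, so it recurs along the unrolled sheet at intervals equal to each turn’s circumference, and those intervals shrink steadily toward the core.

```python
import math

def damage_positions(outer_circumference_cm, thickness_cm, n_turns):
    """Predict where one damage point recurs along the unrolled scroll.

    Each inner turn is shorter by 2*pi*thickness, so the interval
    between successive damage points shrinks toward the scroll's core.
    """
    positions = [0.0]
    circ = outer_circumference_cm
    for _ in range(n_turns):
        positions.append(positions[-1] + circ)
        circ -= 2 * math.pi * thickness_cm  # next turn is one layer smaller
    return positions

# Hypothetical numbers: a 30 cm outer turn, 0.03 cm sheet thickness.
pos = damage_positions(30.0, 0.03, 5)
gaps = [round(b - a, 2) for a, b in zip(pos, pos[1:])]
print(gaps)  # [30.0, 29.81, 29.62, 29.43, 29.25]
```

In practice the inference runs in reverse: the measured, shrinking intervals between repeating damage points indicate which turn a fragment came from, and hence how far apart two fragments must sit in the reconstruction.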

Posted in Blog

New Middle English Dictionary interface — now with readable source code!

The medieval social media communities, at least those that do work in Middle English, are abuzz about the new Middle English Dictionary search interface that Henry Litwhiler put together. And it is much slicker than the now decade-old interface that comes with the dictionary proper. I have a little trepidation about the use of Levenshtein distances, based on my experiences with the Medieval Electronic Scholarly Alliance’s fuzzy search, but he’s matching against both the headwords and the variants, so that should help solve the problem. It’s the method I planned to use for word lookups for the Minor Works of Lydgate project (using what I suspect is much the same approach: web-scraped entries reduced to JSON files containing headwords and variants), and I have joined the no-doubt growing list of people inquiring whether he has an API or some other way to include his work, with proper attribution, on other people’s projects.
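The lookup strategy is simple enough to sketch. This is my own illustrative mock-up, not Litwhiler’s code, and the two-entry dictionary is invented; the point is just that matching a query against spelling variants as well as headwords absorbs a lot of Middle English orthographic noise.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# headword -> known spelling variants (hypothetical data)
entries = {
    "knight": ["knyght", "kniht", "cniht"],
    "night": ["nyght", "niht"],
}

def lookup(query: str) -> str:
    """Return the headword whose headword-or-variant is closest to query."""
    best = min(
        ((levenshtein(query, form), head)
         for head, variants in entries.items()
         for form in [head] + variants),
        key=lambda t: t[0],
    )
    return best[1]

print(lookup("knygte"))  # matches via the variant "knyght"
```

A wildly misspelled query that is far from every headword can still land two edits from some attested variant, which is exactly the behavior a dictionary of erratic medieval spellings needs.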
What I really want to talk about in this post, though, is his source code. Source code on the web is often obfuscated, which is supposed to make pages faster to serve when you have millions of hits, but it also makes the code hard for people to read. What I really like about Litwhiler’s work here is that his code is human-readable: if a variable has a meaning beyond being a simple variable for computational purposes, that meaning is understandable to people looking at the source, and how the source fits with what you see when you look at the tool is readily apparent.
It’s a simple approach, but it’s not one often pursued in academic digital projects, because they typically use off-the-shelf products that obfuscate their code or have programming staff who do that work themselves as a matter of course. Obfuscation hurts the pedagogical value of a digital project, and it’s something I would like to see less of in academic work, so I felt I should foreground it here.
Posted in Blog

Working with OMEKA at DH@Guelph 2017

The DH@Guelph Summer Workshop was held at the University of Guelph the first week of May 2017. I attended the OMEKA Workshop, which was co-taught by University of Guelph Special Collections Librarian Melissa McAfee and Library Associate Ashley Shifflett McBrayne, along with a variety of guest presenters. The four-day workshop allowed participants to work on developing their own omeka.net site.

OMEKA is web-based exhibition software with both .net and .org hosting options. Omeka.net has a free basic plan and tiered paid plans that grant access to more file upload space and theme options. For the purposes of the workshop we used omeka.net with a trial run of Platinum privileges, but I will be migrating my site to McMaster’s omeka.org installation at the end of June.

More about the workshop, held every May, is available here.

I made the decision to experiment with OMEKA as a potential option for building my digital archive project, in part because it is fairly easy to build with minimal computing experience and is a platform commonly used by historians and libraries to exhibit materials. For my project specifically, OMEKA allows me to display in one place primary source materials collected from various archives during my field work in England and Uganda. Further, the exhibit function of the software accommodates descriptions accompanying the uploaded (and downloadable!) materials, a function I use to append biographies of the intellectuals I study in my dissertation as well as short analyses of the sources themselves.

OMEKA has both a front-end and a back-end, like any website. The content and appearance of the site can be managed using the dashboard, which does not require any coding skills. The website is organized using navigation tabs, which are fully customizable from the back-end (discussed later in this post). This is what my omeka.net website looks like from the front-end. I have included screenshots throughout, as the site is incomplete and so has not yet been made publicly available.

OMEKA offers several options for displaying materials: Exhibits, Collections, and Individual Items. I used the exhibit function to define several areas of my research and pull sources together in one section of my website. The collections function can be used in a similar way, and using both functions may be useful when managing a large corpus of material. Below are the two Exhibits I have set up.

The first includes biographies of each of the key African intellectuals I examine in my dissertation, with .pdf files of their written works uploaded. At this stage the .pdf files are not OCR’d, and therefore not searchable, but it is my intention to make some of these documents searchable over the summer using ABBYY FineReader (more to come on this and the challenges of OCR in a later post!). The second exhibit includes books written by European adventurers and missionaries about Uganda from the mid-19th to early 20th century. Most of these books have been digitized by the Internet Archive, and they are in the public domain (having been published before 1923) and available publicly online. The motivation for uploading them here, then, is not to make them accessible but to bring them together in one place and mark their relevance to the writing of Buganda’s intellectuals.

Exhibits and collections are used to organize materials, employing a nesting function for navigating to more specific materials on the left side of the screen. Within these Exhibits are pages, which I have used in a two-tiered system: the first level of pages is for each intellectual, and nested within those, the second level has a page for each document that intellectual wrote.

Attached to these secondary pages are Individual Items, which are the actual documents, in my case .pdf files. OMEKA supports uploading a variety of other file types, including various visual, audio, and video formats. Below you can see how a .pdf file, taken from the Internet Archive, is embedded in the page.

One limitation of OMEKA is the customizability of its appearance: the omeka.net version of the platform offers only seven possible themes, and without expertise in HTML or CSS coding there is very little that can be done to customize the look of your site. This is a good transition to a little discussion of how OMEKA functions from the back-end. For someone who is not proficient at computing and a true beginner at coding, OMEKA is a fairly easy tool to learn. The back-end uses a Dashboard, like most web-based software, to manage content and appearance. This is what the Dashboard looks like.

The part of the back-end I want to spend the most time talking about is adding Individual Items (which includes uploading files) and creating Metadata for these items.

This is what the metadata looks like once completed.

There are many challenges to confront when creating metadata for items/objects/texts, which I will explore in a future blogpost.
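For readers unfamiliar with OMEKA’s metadata scheme, item records follow the Dublin Core element set. As a hedged illustration, here is the kind of record that might sit behind one of my Individual Items; the field names are standard Dublin Core, but the values below are my own example rather than a record copied from the site.

```python
import json

# Example Dublin Core record for one Individual Item (illustrative values)
item_metadata = {
    "Title": "Ekitabo kya Basekabaka be Buganda",
    "Creator": "Apolo Kaggwa",
    "Date": "1901",
    "Language": "Luganda",
    "Type": "Text",
    "Format": "application/pdf",
    "Source": "Internet Archive",
    "Rights": "Public domain (published before 1923)",
    "Description": "A history of the kings of Buganda, "
                   "digitized from a printed copy.",
}

# Serialize the record, e.g. for bulk import or backup
print(json.dumps(item_metadata, indent=2))
```

Even a minimal record like this forces decisions, such as whose name goes in Creator for a dictated text, or how to date a book with multiple editions, which is exactly where the challenges mentioned above begin.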

There is a LOT of material out there on working with OMEKA, so if you are interested in this tool please explore beyond my non-expert overview above. The more popular OMEKA becomes in the DH community, the more debates there are surrounding its use. There are also a number of different web-based tools that do similar things (Drupal, WordPress) and may not require institutional financial support for hosting. After working with OMEKA, I have determined it will work well as a prototype of my digital archive to show to potential partners/collaborators in Uganda, to hopefully encourage their participation in a more robust and interactive future project. In the future, I hope to use Mukurtu as a repository and exhibition tool, as it is specifically designed for indigenous knowledge preservation and addresses issues of ethical collaboration and cultural theft (read more here).


Posted in Blog, Uncategorized

Plotting a Plague Pandemic

Little Hitchhikers

Humans are more than just singular beings; we are “Superorganisms”: vessels for thousands of small life forms that make up our microbiome. We have complex relationships with these resident microbes, ranging from beneficial to parasitic, and these relationships are influenced by numerous biosocial factors (diet, environment, genetics, antibiotic use, etc.). Exploring the effect these microorganisms have on us is a hot topic of research, especially here at McMaster, as it is apparent that the microbiome plays a major role in health and disease, both physical and mental.

But my research doesn’t seek to reinvent health care approaches. Instead, I exploit the human-microbe relationship to tell stories about how humans have migrated and exchanged diseases throughout history. And sometimes, the little microbes that have hitchhiked with us across the globe are even better storytellers than humans themselves.

Human History by Proxy

As early humans dispersed throughout the globe, human populations became geographically separated and diversified. Simultaneously, our microbiomes were co-evolving along with their human hosts, resulting in the distinct geographic distributions of disease we see today. The stomach bug Helicobacter pylori is present in most human populations, and its parallel evolution with humans has been used to reconstruct ancient patterns of human migration, dating all the way back to the dispersal out of Africa (Figure 1). Microbe evolution can also tell us about recent disease dispersal, such as genetically tracking the 2010 Haitian cholera epidemic back to the arrival of UN Nepalese peacekeepers. Bacterial epidemiology can also reveal altered human-environment interactions, as the increased prevalence of zoonotic diseases like malaria and plague can be linked to ecological instability (e.g., extensive deforestation). Microbial evolution serves as a very unique and powerful line of evidence, especially when contextual information (such as historical records) is sparse or inaccurate.

Figure 1. Helicobacter pylori global dispersal (Yamaoka 2010).

So why not just sequence human DNA rather than using a microbial proxy? Humans evolve relatively slowly (with a much longer generation time) and don’t accrue as many DNA mutations within short time frames. In comparison, bacteria replicate extraordinarily quickly, with generation times ranging from 30 minutes to several hours. That means new mutations occur constantly, generating new data points for us to assess biological relationships and potentially acquire finer resolution. In my case, I’m interested in both the movement of humans and how they exchanged infectious diseases in the past, so I turn to both bacterial DNA and historical records to reconstruct these processes.
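The scale difference is easy to make concrete with a back-of-the-envelope calculation. The numbers below are assumed round figures (one bacterial division per hour, a 25-year human generation); pathogens in the wild rarely divide at a constant lab rate, but the gap is so wide that the point survives any reasonable estimate.

```python
# How many generations, and hence rounds of mutation,
# accumulate per century? (assumed round numbers)
MINUTES_PER_YEAR = 365 * 24 * 60

bacterial_generation_min = 60   # assumed: ~1 hour per division
human_generation_years = 25     # assumed: ~25 years per generation

bacterial_gens_per_century = MINUTES_PER_YEAR / bacterial_generation_min * 100
human_gens_per_century = 100 / human_generation_years

print(f"bacteria: ~{bacterial_gens_per_century:,.0f} generations per century")
print(f"humans:   ~{human_gens_per_century:.0f} generations per century")
# bacteria: ~876,000 generations per century
# humans:   ~4 generations per century
```

Hundreds of thousands of divisions versus a handful: that is why a century of plague history can leave a legible mutational trail in bacterial genomes while human genomes barely change.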

Plague and Phylogeography

For my doctoral research, I’m examining one of humanity’s deadliest and most ancient diseases: plague. This infectious disease is infamous for playing a major role in historical pandemics such as the Roman Plague of Justinian and the medieval Black Death, with 30-50% of affected populations perishing during these outbreaks. While mortality on that scale has yet to be observed in the modern era, plague remains entrenched in many geographic regions of the world, the most topical being the ongoing Madagascar plague outbreak of 2017.

Plague’s tendency to appear within a population, seemingly out of nowhere, and vanish in an equally mysterious fashion has long intrigued and frustrated researchers. Despite more than a century of comprehensive research, the origins and global routes of plague spread remain obscure. This is in part because plague is an ecologically complex disease: it is most commonly spread via infected rodents and their fleas, but it can also spread directly between humans. In addition, the limited explanatory power of current models has been attributed to the sparseness and ambiguity of historical mortality records, leading to high levels of uncertainty. In response, alternative lines of evidence have been cleverly sought out. Furthermore, current trends promoting the creation of open-access digital databases have greatly facilitated cross-disciplinary work and opened up previously inaccessible geographic regions and time periods for exploration.

My Project

My research continues this trend of novel lines of evidence by analyzing the ancient DNA (aDNA) of the plague bacterium in order to reconstruct disease dispersal events in human history. By extracting plague DNA from archaeological remains found in epidemic cemeteries, it is possible to identify key genetic mutations that link related bacterial strains and infer distinct waves of infection.  These projects encompass a “phylogeographic” approach, which integrates phylogenetic (evolutionary) relationships with geographical relationships in order to reconstruct the spread of this infectious pathogen. The extraction, sequencing, and evolutionary analysis of plague aDNA is currently being undertaken at the McMaster Ancient DNA Centre.

This project, conducted in collaboration with the Sherman Centre for Digital Scholarship, aims to put the “geography” in the “phylogeography” of plague. Informative geospatial analysis of past pandemics depends on having strong foundational information about modern pandemics, a foundation that does not currently exist in the plague literature. This is not for lack of data: over 600 strains of plague have been sequenced and are publicly available through digital databases. This focused project therefore aims to curate, contextualize, and analyze the digital metadata associated with these plague strains. Armed with this comparative data, I will then be able to start exploring hypotheses such as:

  1. Did trade routes and migration events influence the distribution of plague?
  2. Are there ecological zones that correlate with increased prevalence of plague?
  3. Does genetic evidence complement or contradict archival-based models of plague spread?

My objective is to expand our epidemiological knowledge of plague, in a way that improves our understanding of the interplay of factors contributing to modern re-emergences, as well as the historical events that triggered past pandemics. The combination of molecular genetics and geospatial analysis, driven by humanities-focused questions, offers a unique lens through which to reconstruct the fluctuating patterns of human connectivity and ecological interaction that have shaped our relationship with infectious disease.

Project Organization: The Beginning of the End?

When I’m trying to organize a project, my favorite place to start is… at the end. What kind of finished product do I want to end up with? And how might the answers to these questions be visualized and explored? There’s no shortage of geospatial tools to explore disease epidemiology, so I created 5 criteria to assist in program selection:

  1. Statistical Framework – Hypothesis testing is a must, visualization alone is insufficient.
  2. Disparate Data – Incorporate heterogeneous metadata and account for prior information about evolutionary relationships. (Bayesian GIS anyone?)
  3. Standardized Output – Produce an output file that will be recognized by other geospatial and visualization software. Proprietary file formats are to be avoided.
  4. Aesthetics – Try to avoid the Google Maps API (personal preference).
  5. Learning Barriers – Free, open source, reduced learning curve or plentiful training resources.

As expected, no program satisfies all 5 criteria, so I will likely use a combination, playing to each tool’s strengths and compensating for its weaknesses. There are a variety of R packages that seem promising, the best candidate being BPEC (Bayesian Phylogeographic and Ecological Clustering) (Figure 2). This tool has powerful analytical potential and can identify meaningful geographic clusters in your data. To go deeper into routes of spread, SpreaD3 is well designed for epidemic source tracking and accepts input files from programs I am already using (Figure 3). With an idea of what the downstream tools require, I was better positioned to select appropriate datasets.

Figure 2.  Phylogeographic clustering of frog populations using BPEC (Manolopoulou et al. 2017).

Figure 3. Ebola virus spread in SpreaD3 (Bielejec et al. 2016)
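My standardized-output criterion is also the easiest to act on right away. As a minimal sketch (the field names below are hypothetical, not the repositories’ actual labels), any table of strain records with coordinates can be exported to GeoJSON, a non-proprietary format that most geospatial and visualization software recognizes:

```python
import json

def strains_to_geojson(strains):
    """Convert a list of strain records into a GeoJSON FeatureCollection
    readable by most mapping and visualization tools."""
    features = []
    for s in strains:
        features.append({
            "type": "Feature",
            # GeoJSON orders coordinates as [longitude, latitude]
            "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
            "properties": {"strain": s["strain"], "year": s["year"]},
        })
    return {"type": "FeatureCollection", "features": features}

# CO92 is a real, well-known plague strain; the coordinates here are made up.
records = [{"strain": "CO92", "lat": 38.8, "lon": -104.8, "year": 1992}]
geojson_text = json.dumps(strains_to_geojson(records), indent=2)
```

Writing `geojson_text` to a `.geojson` file produces something QGIS or a web mapping library can open directly, which keeps criterion 3 satisfied no matter which analysis tool produced the coordinates.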

Project Organization: Let’s Get Started

For the previously identified programs, there are three mandatory pieces of information I need for each outbreak record:

  1. DNA sequence data to reconstruct evolutionary relationships.
  2. Geographic location (ideally latitude and longitude).
  3. Collection year (time point calibration).

Accessory variables that would be very interesting to test include host (rodent, human, camel, etc.), but these are rarely made available by submitters.
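To keep myself honest about what counts as a usable record, it helps to write the requirements down as a data structure. This is only an illustrative sketch; the field names are my own, not the repositories’:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutbreakRecord:
    """One plague genome record; field names are illustrative."""
    accession: str                   # repository identifier
    sequence_path: str               # where the DNA sequence data lives
    latitude: Optional[float]        # geographic location...
    longitude: Optional[float]       # ...ideally both coordinates
    collection_year: Optional[int]   # time point calibration
    host: Optional[str] = None       # rodent, human, camel, etc. (rarely given)

    def is_usable(self) -> bool:
        """Usable downstream only if all three mandatory fields are present."""
        return None not in (self.latitude, self.longitude, self.collection_year)

# A made-up record with all three mandatory fields filled in
rec = OutbreakRecord("EXAMPLE-001", "seqs/example.fa", 47.9, 106.9, 1955)
```

Anything for which `is_usable()` is false has to be repaired from publications or dropped, which is exactly the bottleneck described below.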

Based on my review of the plague literature, I was expecting to find about 150 plague genome records, as this number seemed representative of current publications. I began my search by scouring online genome repositories (NCBI, ENA, DDBJ) to identify datasets. To my great surprise, I found over 600 plague genome sequencing projects, which 1) had been published on but whose quality could be improved, 2) had been published on only in a limited descriptive sense, or 3) had no publications associated with them. Despite being overwhelmed by an unexpected amount of data, I’m still very excited by the potential to contribute something new and meaningful with data that is mostly untouched.

The problem is that this data sits behind a very scary wall: big data science. The actual genetic sequence data is enormous and complex (we’ll save that for another post), and the metadata is messy, with many missing fields. The messiness can in part be cleaned up with tools like OpenRefine, but the missing data means either manually hunting through supplementary files or discarding a lot of records.
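OpenRefine-style clean-up can also be scripted. As a small sketch (the messy values below are invented, but typical of user-submitted metadata), the same location written three different ways can be collapsed into one canonical form, with effectively-missing values flagged for manual review:

```python
import re

MISSING_TOKENS = {"", "missing", "na", "n/a", "unknown", "not collected"}
FIXUPS = {"Usa": "USA"}  # title-casing mangles acronyms; patch the common ones

def clean_location(raw):
    """Normalize a free-text geographic field; None means 'go hunt manually'."""
    if raw is None or raw.strip().lower() in MISSING_TOKENS:
        return None
    parts = [p.strip().title() for p in re.split(r"[:,]", raw) if p.strip()]
    parts = [FIXUPS.get(p, p) for p in parts]
    # Sorting makes "USA: Colorado" and "Colorado, USA" collapse to one form
    return ", ".join(sorted(parts))

examples = ["USA: Colorado", "colorado , usa", "Colorado, USA", "missing"]
cleaned = [clean_location(x) for x in examples]
```

The first three messy spellings all normalize to the same string, while the fourth comes back as missing, so the record can be flagged rather than silently kept.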

I then investigated how to query, download, and parse over 600 files of metadata into a meaningful table I could import into downstream applications. I tested a number of APIs (Bioconductor, Biopython, SRAdb, MetaSRA) but wound up at least partly dissatisfied with most of them. My current strategy is rather ad hoc: I combine multiple programmatic APIs, web-browser GUIs, and my own Python scripts to build up a geospatial relational database. The result is functional but unwieldy, and rather limited in application beyond my own project (Figure 4). I’m also currently missing about 300 database records, largely because of consistency issues inherent to repositories governed by user submission.

Figure 4. One table (among many) present in my SQLite relational database.
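To give a flavour of what one such table might look like, here is a simplified sketch using Python’s built-in sqlite3 module; the schema and the sample rows are illustrative guesses, not my actual database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real database lives on disk
conn.execute("""
    CREATE TABLE biosample (
        accession       TEXT PRIMARY KEY,   -- repository identifier
        organism        TEXT,
        latitude        REAL,               -- often NULL in submitted metadata
        longitude       REAL,
        collection_year INTEGER
    )
""")
conn.executemany(
    "INSERT INTO biosample VALUES (?, ?, ?, ?, ?)",
    [
        ("EXAMPLE-001", "Yersinia pestis", 47.92, 106.92, 1950),
        ("EXAMPLE-002", "Yersinia pestis", None, None, None),  # messy record
    ],
)

# Flag records missing any of the three mandatory fields for manual follow-up
incomplete = conn.execute(
    "SELECT accession FROM biosample WHERE latitude IS NULL "
    "OR longitude IS NULL OR collection_year IS NULL"
).fetchall()
```

A query like the last one is how the ~300 records needing manual publication-hunting surface out of the full set.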

Next Steps

My next goal is to get the database pipeline to a point where I’m satisfied it has scraped as much information as it can via automation. From there I will move to manually going through associated publications to fill in geographic locations and dates where I can. At that point, it will be time to start experimenting with geospatial tools to figure out how I’m going to visualize hundreds of years of global disease dispersal in an informative manner. I’m looking forward to geospatial workshops in the new year, and I’ll be showcasing some preliminary maps in my next blog post!


Figure References

Bielejec, F., Baele, G., Vrancken, B., Suchard, M. A., Rambaut, A., Lemey, P. (2016) SpreaD3: interactive visualisation of spatiotemporal history and trait evolutionary processes. Molecular Biology and Evolution. 33 (8): 2167-2169.

Manolopoulou, I., Hille, A., Emerson, B. (2017). BPEC: An R package for Bayesian phylogeographic and ecological clustering. Journal of Statistical Software. arXiv:1604.01617v2

Yamaoka, Y. (2010). Mechanisms of disease: Helicobacter pylori virulence factors. Nature Reviews Gastroenterology & Hepatology. 7: 629–641


Developing A Photogrammetry Toolkit For Rapid, Low Cost And High Fidelity 3D Scans

As a current PhD student in the Communications, Cultural Studies and New Media Program at McMaster University, my research revolves around the application of new media to create personal archives for individuals or relatively small communities, groups, and peoples. I focus primarily on marginalized populations, including ageing populations, people of colour, indigenous peoples, people with accessibility needs, and migrant populations, especially those displaced by climate disaster, armed conflict, and global economics.

These new media archives are rooted in enabling the community itself to accessibly and rapidly generate its own archival content, in response to the inability of traditionally large institutions, like museums and government-run organizations, to include marginalized people, especially in the face of rapid change caused by climate disaster or armed conflict. The new media forms I intend to include in my research are audio recording, photography, 3D scanning, and 3D printing. For the Sherman Centre Graduate Residency in Digital Scholarship I will mainly be focused on photogrammetry technology. Photogrammetry is a 3D scanning technique that requires an object to be photographed from multiple angles; these photographs are then compared in a computer program, and a full colour 3D mesh can be created. This mesh can be very high resolution, allowing objects to be viewed in 3D on a computer or virtual reality headset. The mesh can also be 3D printed in full colour out of various materials.

For this residency I will be developing a toolkit for accessible and rapid 3D scanning and printing, using photogrammetry and prosumer 3D printers, that can be implemented by small community groups and not-for-profit organizations to create personal archives and digital scholarship. This research will culminate in a final working prototype that also functions as a work of speculative design: a “break the glass in case of emergency” photogrammetry scanner that can be used in areas affected by catastrophe, including climate disaster and armed conflict that may lead to mass migration or evacuation, where personal artefacts and heirlooms may not be able to be transported or preserved and need to be documented in three dimensions quickly. This system will be based on low-cost, open-source electronics (mainly the Raspberry Pi) and a 3D printed protective housing.

The aforementioned toolkit would include a digital wiki/archive of programs and tutorials on how to use several programs to create the kind of media I will be generating during the residency. This toolkit can be useful for individuals, community groups, and larger institutions looking to build photogrammetry-based 3D scanned and 3D printed archives.

During my residency in the Sherman Centre for Digital Scholarship I have begun using Agisoft PhotoScan (http://www.agisoft.com/), a prosumer-style photogrammetry program offering some professional features while maintaining user friendliness and a relatively low cost. I have done several rounds of testing with other programs and I will continue to update the blog with my results.

Below are some examples of the process and results using Agisoft PhotoScan to scan a molcajete and temolote:

Above, the molcajete and temolote set can be seen with the 3 LED lights used to create even lighting. Using even lighting like this seems to have helped generate accurate scans.

Above, the temolote is ready to be photographed from several angles, in a dome pattern around a radius of about 4 feet. I used an 85mm prime lens at f/22 and 18 megapixels. These settings help to reduce distortions that might make the scan reconstruction difficult.
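The dome pattern can also be planned in advance, which is what the eventual Raspberry Pi rig would need to do automatically. Below is a rough sketch of how such a rig might enumerate camera positions; the ring count and elevation angles are my own guesses for illustration, not settings taken from PhotoScan:

```python
import math

def dome_positions(radius_m=1.2, rings=3, shots_per_ring=12):
    """Camera positions on a dome around the object (x, y, z in metres).

    Several rings of shots at increasing elevation, roughly matching the
    photograph-from-several-angles pattern described above (4 ft ~ 1.2 m).
    """
    positions = []
    for r in range(rings):
        elevation = math.radians(15 + r * 25)  # 15, 40, 65 degrees up
        for s in range(shots_per_ring):
            azimuth = 2 * math.pi * s / shots_per_ring
            positions.append((
                radius_m * math.cos(elevation) * math.cos(azimuth),
                radius_m * math.cos(elevation) * math.sin(azimuth),
                radius_m * math.sin(elevation),
            ))
    return positions

shots = dome_positions()  # 3 rings x 12 shots = 36 photographs
```

Every position sits exactly one radius from the object, so overlap between neighbouring photos stays consistent, which is what photogrammetry alignment depends on.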

Above, a view of the Agisoft PhotoScan software, after the images were aligned and a mesh generated.

In the video above, the images taken around the temolote can be seen. These images are processed to create a 3D mesh that is also textured with the photos, giving a full colour model.

From there, an OBJ or STL 3D file is exported, as depicted above. This image was rendered in Blender3D. This file can be used to 3D print.

Printing the scan on an Ultimaker 2+ in natural PLA filament.

The result is a highly detailed 3D print. The settings of the 3D scan file and 3D print file (G-code sent to the printer) can be fine-tuned to yield greater detail, at the cost of much longer generation and print times.


Below are some readings you can use to learn more about the subjects of 3D scanning, 3D printing and speculative design:

Morehshin Allahyari

Material Speculation: ISIS (2015-2016)
“Material Speculation: ISIS” is a 3D modeling and 3D printing project focused on the reconstruction of 12 selected (original) artifacts (statues from the Roman period city of Hatra and Assyrian artifacts from Nineveh) that were destroyed by ISIS in 2015. “Material Speculation: ISIS” creates a practical and political possibility for artifact archival, while also proposing 3D printing technology as a tool both for resistance and documentation. It intends to use 3D printing as a process for repairing history and memory.



Near Future Laboratory

At the Near Future Laboratory, our goal is to understand how imaginations and hypotheses become materialized to swerve the present into new, more habitable near future worlds. We work from a variety of conceptual and creative platforms to help explicate context and explore future states, including various calibers of research — from the field to the desk to the lab bench and everything in between.



Dunne & Raby

Today designers often focus on making technology easy to use, sexy, and consumable. In Speculative Everything, Anthony Dunne and Fiona Raby propose a kind of design that is used as a tool to create not only things but ideas. For them, design is a means of speculating about how things could be—to imagine possible futures. This is not the usual sort of predicting or forecasting, spotting trends and extrapolating; these kinds of predictions have been proven wrong, again and again. Instead, Dunne and Raby pose “what if” questions that are intended to open debate and discussion about the kind of future people want (and do not want).

http://www.dunneandraby.co.uk/content/projects/756/0 http://www.dunneandraby.co.uk/content/projects/75/0


Some Reflections on the Intersection between Conventional and Digital Approaches to Scrolls Research

Over the last seventy years, Dead Sea Scrolls research has carried on in a permanent state of revolution, with new methods, technologies, and bodies of evidence overturning or qualifying old consensuses. To current PhD students like myself, who are dissertating on the Scrolls, many of the recent advances in digital approaches and tools appear to be changing the face of the discipline; however, to seasoned scholars this revolutionary change is nothing new. Scrolls research has always been like Menelaus wresting an oracle from the shape-shifting Proteus—change and adaptation are the norm. The ill-conceived myth of the triumph of digital scholarship over conventional scholarship simply does not apply. The key consideration for early-career Scrolls scholars is how to follow in the footsteps of earlier generations in usefully integrating new tools and approaches without abandoning the conventional. During research that I carried out this summer in Jerusalem on the Thanksgiving Hymns from Qumran (1QHodayota), I frequently found myself combining the old with the new to address pressing research questions.

1QHa is a particularly challenging scroll to study because, unlike many of the other Dead Sea Scrolls, new high-quality images, such as high-resolution multispectral images or RTI (reflectance transformation imaging) images, are not yet available. Even if they were, however, the plates in the Dead Sea Scrolls of the Hebrew University,[1] the Shrine of the Book images, and the plates in the edition of 1QHa in volume 40 of the Discoveries in the Judaean Desert series would still be indispensable.[2]

The older images document the state of the manuscript in the years after its discovery and in the process of its unrolling—a resource that new digital tools or approaches cannot replace. Consequently, I find myself drawing heavily on conventional editions and photographs, even as I am making digital reconstructions of columns in GIMP and rolling them in three-dimensional environments to compare patterns of damage in digital modeling suites like Blender. When creating a reconstruction of a scroll in its rolled state, it is best to use these early images so that modern shrinkage, decay, or damage is not baked into the model. Thus, even digital Scrolls research is forever anchored to those initial images.
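For readers unfamiliar with how damage patterns constrain a rolled reconstruction: in a tightly wound scroll, a single point of damage recurs roughly once per winding, at intervals that shrink by one sheet-thickness-worth of circumference each turn toward the core. A hedged sketch of that arithmetic, using made-up measurements rather than the actual dimensions of 1QHa:

```python
import math

def damage_intervals(outer_radius_cm, thickness_cm, turns):
    """Distances along the unrolled scroll between successive recurrences
    of one damage point, assuming a tightly wound roll: each interval is
    about one circumference, and each winding sits one sheet-thickness
    closer to the core."""
    intervals = []
    radius = outer_radius_cm
    for _ in range(turns):
        intervals.append(2 * math.pi * radius)
        radius -= thickness_cm
    return intervals

# Illustrative numbers only: a roll ~5 cm across, ~0.5 mm material
gaps = damage_intervals(outer_radius_cm=2.5, thickness_cm=0.05, turns=5)
```

Matching the observed spacing of repeating damages in the early photographs against curves like this is one way a digital model can test where a column or fragment sat within the roll.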

In addition, when working on problems of material reconstruction, there are questions that cannot be answered relying solely on either editions or digital tools. Scrolls are three-dimensional objects, and certain aspects are not fully captured by existing images; e.g., texture, thickness, shrinkage, light damage, and the extent of delamination. What appear in photos to be patterns of repeating damages, or potential joins between fragments, may be ruled out upon first-hand inspection of the fragments themselves, especially with the help of conservators who are intimately familiar with the physical manuscripts and causes of damage. I found this to be the case when I visited the Shrine of the Book, where 1QHa is archived. Hasia Rimon, a conservator who has worked closely with the Shrine’s manuscripts since 2012, helped me to see and understand the condition of the manuscript and how it has been conserved since its discovery. The same applies for conservators at other institutions that conserve Dead Sea scrolls, most notably the Israel Antiquities Authority, which is responsible for the vast majority of the Judean Desert manuscripts, including the other Hodayot manuscripts.

Shrine of the Book, Israel Museum

Shrine of the Book, Israel Museum. Photo Credit: Author.

Furthermore, a visit to the Shrine of the Book or the IAA is the only way of tapping into the institutional memory of the discovery of the Scrolls and their condition over the course of the last seventy years. For example, anyone who has visited the Shrine of the Book will know of Irene Lewitt’s formidable knowledge of the whereabouts of the Shrine’s scrolls and their photos over the last 70 years—especially that of 1QHa and the other Hebrew University scrolls. A similar knowledge-base exists at many of the institutions in Jerusalem with historical ties to the Scrolls, like the Orion Center, the École Biblique, the Rockefeller Museum, and the Albright Institute.

One of the perennial methodological concerns for digital scholarship is how to use new tools and approaches judiciously and in ways that actually advance the field. For Scrolls research, implementing new digital approaches requires a thorough consideration of the conventional resources, tools, and institutional memories to gain new insights. This combination of innovation and convention is nothing new—it is business as usual for Scrolls scholarship in making use of every available means to yield new insights into the Dead Sea Scrolls.

[Expanded from 2017 Newsletter of the Orion Center for the Dead Sea Scrolls]

[1] E. L. Sukenik, The Dead Sea Scrolls of the Hebrew University (Jerusalem: Magnes Press, 1955).

[2] Hartmut Stegemann and Eileen Schuller, DJD 40.

Works Cited

Schuller, Eileen and Hartmut Stegemann. Qumran Cave 1.III: 1QHodayota with Incorporation of 1QHodayotb and 4QHodayota-f. DJD XL. Oxford: Clarendon, 2009.

Sukenik, E. L. The Dead Sea Scrolls of the Hebrew University. Jerusalem: Magnes Press, 1955.

Making Uganda’s Intellectual History Digital: Knowledge Preservation and Ethical Considerations

As a historian of Africa and the colonized world, my research continuously pushes me to consider the unequal power relationships that govern the preservation and presentation of knowledge about the past in these places. Asking how history is being done, by whom, and with what sources is necessary for undertaking ethical scholarship. How does the academy’s presentation and consumption of sources shape their historical meaning? How can digital tools be used ethically to develop and enrich our fields of study? Are the Digital Humanities neo-colonial?

My name is Samantha Stevens-Hall and I am a 5th year PhD student in the History Department and a returning Graduate Fellow/Resident at the Sherman Centre this year. I am a historian of Africa, more specifically of the intellectual history of the kingdom of Buganda, the predecessor to modern-day Uganda, in the 19th century during the transition to British colonial rule. My dissertation research took me to several continents over the course of 2 years, during which I visited a variety of libraries and archives. It was my experiences in these archives that brought me to the Digital Humanities. The materials pertinent to this intellectual history are not housed within one country, let alone one continent. While Britain, as the metropole, has fairly rich archives on this period, Uganda itself holds mostly fragmentary and poorly preserved copies, if any, of the intellectual works of some of the key figures in the country’s past. And so, I began to consider how to make these materials more available, both within Uganda and elsewhere outside of Britain, so that this history is no longer restricted to those with the monetary and institutional support needed to undertake long-distance fieldwork research.

My dissertation deals with networks of knowledge and knowledge transfer during the period of transition from oral to written culture in Uganda, which coincided with the transition to British colonial rule. I am interested in what happens to knowledge and sources when they are transferred between mediums: from oral to written, typescript to microfilm, catalogued in physical archives to uploaded to the web as digital sources. In the case of the sources used in my dissertation, this means from oral to handwritten in the vernacular, from manuscript to typescript translated into English, and finally to partial digitization in the contemporary period. Digitizing some of these pieces of intellectual history offers the opportunity for discussion about what happens to sources when they are transferred from one medium to another. As much of my thesis deals with themes of translation and the transition from oral to written culture, I am also interested in what happens to the colonial archive and the dissemination of colonial knowledge when sources are made available digitally.

My DH project is an open access digital archive of primary sources and supplementary materials in African intellectual history. This archive would serve as a repository for endangered documentary materials and as an exhibition to curate and display the intellectual history of Uganda. The materials incorporated come from the archival work done for my dissertation; these include biographies of a few key Ugandan intellectuals who are the focus of my dissertation, with appended excerpts from their works. This archive will bring together scattered sources into one easily accessible online resource. Further, it would make a contribution to the DH community through its mandate of decolonizing the archive and attempting to bridge the “digital divide” between the West and Africa in computing access and capabilities.

Last year my proposed project was a prototype digital archive containing documents and other materials pertaining to the history of Uganda from the 19th century, collected during my dissertation research. While this has not changed significantly from last year, the goals of the project have shifted somewhat, and what is realistically possible to complete in 12 months, and the steps necessary to meet these goals, have become much better defined. Last year I proposed building an online archive and exhibition structured around three portfolios of Ugandan intellectuals from the period of transition to British colonial rule in East Africa during the last decades of the 19th century and the first decades of the 20th. While I am no longer sure that organizing the exhibition biographically makes the most sense, it will be divided into folders arranged along thematic, temporal, or biographical lines, with each containing document files and appended relevant information and analysis. The excerpts would come from materials collected during my archival work over the past three years. Some of these materials have been published and others are from unpublished manuscripts. The excerpts would be selected to show the dynamic character and variety of intellectual activity in Uganda, in a way that supports the key arguments in my thesis: that these intellectuals were multidimensional figures engaged in a vibrant culture of knowledge exchange and debate over representations of the past. The archive will bring together materials that are now held in disparate and distant archives across several continents and not digitized, which prohibits their study without extensive funding for travel. Creating an Open Access archive would make the materials available much more widely. This will foster new studies of Uganda’s intellectual past from within the country’s own institutions and contribute to both the preservation and dissemination of knowledge about the country’s past.

On a final note, most of the documents I am working with are not easily accessible outside of archives or university libraries in the West. The archives that do house some of these sources in Uganda are in poor condition; if not catalogued and digitized soon, the materials risk complete destruction. Beyond my dissertation work I am deeply interested in the tenuous relationship between history and politics in contemporary Uganda. History is often a “dirty” word, and no national histories are taught in primary or secondary school. The study of history is overshadowed by disciplines with more applicable career skills at the nation’s universities. That being said, some academics and politicians are interested in preserving the region’s history and are willing to undertake the massive project of preserving the archives and turning the tide of public opinion back towards valuing history. My hope is that this project will be a step in the right direction and provide a possible template for future archival repositories, community engagement, and ethical knowledge preservation and dissemination.


Visualizing Climate Change and Environmental Disaster in Ontario

On 10 July, 1911, one of the deadliest forest fires in Ontario history ripped through the north, totally destroying the new gold rush community of Porcupine Lake. When it was all over, the blaze claimed seventy-three lives, burned over half a million acres, and caused millions of dollars in property damage. People burned to death in their homes, suffocated in mine shafts, and drowned while trying to take shelter in the storm-ravaged lakes. The nascent mines, surrounding communities, railways, and other infrastructure were reduced to twisted metal and rubble. The blaze made international headlines, and was called by the media the “worst disaster in Ontario history.”[1]

In an era of increasingly regular climate-related natural disasters, the Great Fire of 1911 proves instructive. What human and environmental factors made the fire so catastrophic? How did climate shape the fire and human responses to it? Finally, we all know that the climate is changing, but what does that look like on the local scale? How does 1911 compare to today?

Regular fire cycles are a natural part of Ontario’s northern forest ecology – burns like the Great Fire of 1911 occur in roughly 10 year intervals.[2] Northerners knew about these burn cycles and had a long history of living with fire. The annual report for the Ontario Bureau of Mines recorded burned forests every summer of its tenure in the province, starting in 1891.

Given the regular, predictable nature of northern fire, the second chapter of my dissertation argues that the destructiveness of the Great Fire of 1911 stemmed from a combination of factors. In their hurry to exploit local gold, newcomers had not thought much about fire-proof construction. Adding to this problem, increased population concentration, insect attacks in 1905, and a policy of active fire suppression in Ontario led to the build-up of excess fuel on the landscape.

At the same time, Ontario (along with the rest of North America) entered an especially hot, dry period which peaked in the summer of 1911.

The warm, dry period of the early twentieth century is not something I invented. Fire historian Stephen Pyne lists this early twentieth-century warming as one of the major contributing factors to the devastating forest fires in America in 1910 in his book Year of the Fires: The Story of the Great Fires of 1910. But how did these climate patterns specifically affect Ontario, and can we, as Pyne has done for the United States, connect the Great Fire of 1911 to climatic warming in Ontario during these years?

The government of Canada keeps historic climate data on its website. The data is divided by weather station, and some stations go back further than others. In the case of Porcupine I was immediately hamstrung by the fact that Porcupine records only go back to 1922. So let’s be clear: I cannot actually say what was going on in my study area in 1911 climatically. In fact, the furthest north I could get was Ottawa. However, at the risk of re-affirming Ottawa’s perceived place at the center of the universe, I can say with confidence that if it was hotter and drier than normal in Ottawa, it was probably hotter and drier than normal in Porcupine – the climates are close enough for the Ottawa data to be useful for seeing broader trends in Ontario climate for the early 20th century.

The data shows that in 1911 Ottawa recorded its hottest-ever days on 3 July, 9 July, and 10 July (the day of the fire), along with well-below-average rainfall and light snowpacks.[3] I can corroborate this finding with anecdotal evidence. The Globe recorded fifty-eight heat-related deaths and dangerously low city reservoirs on 8 July 1911.[4] Dominion Horticulturalist W. T. Macoun recorded a hot, dry spring (which shortened the flowering season for many blooms) and noted that “July was an extraordinarily hot month, one of the hottest ever experienced.” According to Macoun, the mean temperature in July was a scorching 97.8 degrees Fahrenheit (36.5 degrees Celsius), nights remained hot, and rainfall was light.[5]

Just to get a sense of how hot it was in July of 1911, and of what the government climate data looks like, take a look at the screenshots below. The two tables show the daily temperatures for the first 15 days of July 1911 versus July 2017 (degrees Celsius).


To look more closely at 1911, but also to think about how the climate has changed over the long term, let’s plug the climate data into some visualization software.

Here’s a graph I made with a pivot chart in Excel showing the maximum high temperature for each year over the entire period of record.
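For anyone who prefers a script to a spreadsheet, the same pivot (the hottest recorded day per year) takes only a few lines of Python. The tiny CSV below is a stand-in I made up for illustration, not real Ottawa data:

```python
import csv
import io
from collections import defaultdict

# Invented sample rows mimicking a daily-temperature download
CSV_TEXT = """Year,Max Temp (C)
1911,36.1
1911,33.0
1912,28.5
2012,37.8
2012,30.2
"""

def annual_max_temps(csv_text):
    """Reproduce the pivot: the maximum high temperature for each year."""
    maxima = defaultdict(lambda: float("-inf"))
    for row in csv.DictReader(io.StringIO(csv_text)):
        year = int(row["Year"])
        maxima[year] = max(maxima[year], float(row["Max Temp (C)"]))
    return dict(maxima)

annual_max = annual_max_temps(CSV_TEXT)  # one hottest-day value per year
```

Feeding a real station download through `annual_max_temps` and plotting the result year by year gives the same picture as the pivot chart.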

Now the obvious story here is 2012 (woah), but we can also see a cluster of high-temperature years at the beginning of the 20th century, between 1901 and 1917, which is the hot period I talk about above.

Here’s precipitation:

Here we can see that the early 20th century saw low average precipitation, again supporting the idea that Ontario suffered a particularly dry period before the great fire. And if we look ahead to the 21st century, precipitation begins to fluctuate between greater low and high extremes.

And here’s snowfall, this time in Tableau.

That big dip between 1905 and 1915 is 1911, when snowpacks were abnormally light. But again, there is an interesting story starting around 1975 when we can see a real sustained drop in average snowfall.

The three graphs lend weight to my argument that 1911 was a particularly hot, dry year in Ontario, and they show how that period fits into the longer story of our climate.

They also show the gradual effects of climate change since about the mid-twentieth century. Climate change is a gradual, nearly invisible process in our day-to-day lives, but looking at the historic climate data we can see how it has already impacted local patterns in precipitation and temperature in our communities.

Given the changing climate and the increase in extreme weather events in the 21st century – including forest fires – what lessons can we draw from the Great Fire of 1911?

In the aftermath of the catastrophe, the Great Fire of 1911 became memorialized as a transitional moment or a “baptism by fire” for mining in Northern Ontario. Porcupine’s ability to survive and thrive after the fire continues to be cited as a testament to the community’s toughness and tenacity on a difficult frontier. By 1914, Porcupine had gone from a relatively chaotic small-scale gold rush to a large-scale, low-grade, efficient, deep-mining industry on par with the biggest mining enterprises in the world. In my dissertation, I argue that the economic destruction of smaller mining companies allowed bigger syndicates to buy up valuable land and take control of the best deposits after 1911.

How did the industry adapt to the disaster? In order to protect its assets against future calamities, Porcupine assembled forest-firefighting infrastructure, including fireproof buildings, fire towers, and a full-time firefighting force.[6]

These measures failed: Porcupine burned again in 1916, experienced a major mine fire in 1928, and has been periodically plagued by bush fires right up to the present.[7] Moreover, by focusing on fire prevention, Porcupine failed to adapt to other environmental problems, including flooding, land scarcity, food and water insecurity, and mine waste disposal – all of which caused significant problems down the road.

This is sort of a depressing takeaway. But with a little more research, I think I can dig into this evidence for examples of successful adaptation to climate change and extreme weather in Ontario. In general, when facing environmental problems, did collaboration with community members produce better outcomes? What was the role of international science in environmental adaptation? What specific characteristics of the successful post-1911 companies helped them endure the trauma of the fire? There are other anomalies in those long-term graphs that produced less catastrophic histories – what happened during those years that allowed mining to proceed unscathed?

The answers to these sorts of questions could contain lessons that can potentially help Canadian resource communities adapt in the future.

Disclaimer: I am not a statistician, so I welcome any feedback and/or pointing out of my glaring errors.

Further Reading:

Global Historical Climatology Network

Historical Climatology

Climate History Network


[1] “Porcupine Disaster Intensifies; Refugees Fleeing from the Scene,” The Globe, 14 July 1911.

[2] I.D. Thompson, A. Perera, and David Euler, Ministry of Natural Resources, Ecology of a Managed Terrestrial Landscape: Patterns and Processes of Forest Landscapes in Ontario (Vancouver: UBC Press, 2000), 41-42.

[3] Government of Canada, “Ottawa data” in Almanac Averages and Extremes, Historical Climate Data, Accessed 18 May 2017, http://climate.weather.gc.ca/.

[4] “Record is now fifty-eight deaths,” The Globe, 8 July 1911.

[5] W.T. Macoun, “Report of the Dominion Horticulturalist,” No. 16, 31 March 1912, in Second Session of the Twelfth Parliament of the Dominion of Canada Session 1912-13 (Sessional Papers) (Vol. 9), 86.

[6] The Porcupine Advance documents a long history of fire protection measures. See “Can New Bush Fires be Prevented?” Porcupine Advance, 9 August 1916; “Town Council Passes Fire Bylaw,” Porcupine Advance, 22 November 1916; “The Heliograph used in Firefighting,” Porcupine Advance, 31 January 1917; “Government Plans for Preventing Fires,” Porcupine Advance, 14 February 1917; “Modern Electric Fire Alarm System,” Porcupine Advance, 19 September 1917; “Getting Ready for Fire Menace,” Porcupine Advance, 5 May 1920; “Cultivated Fields Would Remove Fire Menace,” Porcupine Advance, 11 October 1922; “Mile Fireguard Around Northern Town,” Porcupine Advance, 8 November 1922; “Fire Guard Around Timmins,” Porcupine Advance, 30 May 1923; “New Fire Towers,” Porcupine Advance, 9 January 1930.

[7] “Terrible Fires Sweep Northern Ontario,” Porcupine Advance, 5 August 1916; “Early Bush Fires,” Porcupine Advance, 2 June 1920; “45 Die in Big Fire,” Porcupine Advance, 14 October 1922; “Fire Rings About Towns,” Porcupine Advance, 4 November 1922; “Fire Hazard,” Porcupine Advance, 4 February 1923; “Final Hollinger Fire Report,” Porcupine Advance, 18 October 1928; “Forest Fires Rage,” Porcupine Advance, 1 August 1929.


Putting Health Beliefs on Maps

As an extension to my dissertation, this project stems from a long-standing interest in global health outreach, gender studies, and public health policy. Successful public health policies rely on a deep understanding of the various health beliefs that underpin health behaviours. This pilot project will be an interactive online archive of the health beliefs that underpin anti-vaccine movements around the world. It draws on Arthur Kleinman’s 1978 anthropological framework of the internal structure of the health care system (Figure 1), which comprises three belief systems that influence our health practices of choice. First, the “professional sector” represents modern/Western medicine (e.g., hospital care, medical-school-trained physicians, immunization; its concepts include evidence-based practice, diagnosis, treatment, prescription, and prognosis). Second, the “popular sector” represents contemporary influences (e.g., the health beliefs of friends and family, celebrity health claims, advertisements of health practices, internet dietitians, naturopaths, and homeopaths, including an array of contemporary false claims such as “vaccine toxicity,” “immune overload,” and “adrenal fatigue”). Third, the “folk sector” represents traditional healing and wellness practices (e.g., healing, spiritual healing, natural healing). These three belief systems can overlap, complement, and/or conflict with one another; the implementation of successful public health policies therefore depends on a deep anthropological and phenomenological understanding of the lived experiences of the population.

An example of the importance of documenting health beliefs is the low polio vaccine coverage rate in Nigeria, where polio is endemic: there is currently a widespread belief that the polio vaccine is an American bio-weapon that sterilizes the Muslim population as a form of mass genocide. Similarly, in Japan, where cervical cancer remains prevalent, HPV vaccine coverage remains at 0% due to rumours of HPV-vaccine-induced anaphylaxis promoted by major media outlets, which led the Japanese government to defund the HPV vaccine due to low public interest.

Over the course of the next nine months, I will produce an interactive map documenting health beliefs that impede vaccination efforts. A sample webpage is “Healthmap” (Figure 2), and the sources of documents are similar to those of “The Vaccine Confidence Project”. The former documents infectious diseases and outbreaks through active surveillance (outbreaks are collected by mining select media on the web for disease keywords and then added to the registry) and passive surveillance (outbreaks are “added” by the public, usually health practitioners and Centers for Disease Control professionals, and verified by the site owner). The latter was established in 2013 to collect news and published articles on vaccine hesitancy. The “Atlas of Vaccine Hesitancy” will help researchers navigate health beliefs prior to implementing any immunization policy. It fills a gap by documenting the health beliefs that underpin vaccine refusal in an easily retrievable way online, and it will allow submissions of new entries from researchers, health professionals, and the public.

This website could be used by public health policymakers, global health researchers, anthropologists, epidemiologists, and others; it can also be used by members of the public interested in learning about vaccine refusal and vaccine hesitancy. I envision the site as a portal for knowledge exchange, as well as a lens through which practitioners of Western medicine can come to understand that vaccine hesitancy is not simply a deficit of scientific knowledge, but an interpretive construct of beliefs. Users can navigate with filters such as vaccine type, reasons for vaccine hesitancy, and current or past vaccine refusal (e.g., users can type ‘1995’ or ‘1880’ to find the types of vaccine refusal in the selected year or decade). The search will yield a location of vaccine hesitancy, indicated by a “pin”. Clicking a pin on the map will open a short paragraph on the status of vaccine refusal in the selected region, with links to journal articles, websites, news media, and other archival documents that have documented that hesitancy.
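As a sketch of what such an atlas might store, here is a hypothetical pin record and the kind of filter the search described above would apply; every field name here is my own assumption for illustration, not a settled schema.

```python
# Hypothetical pin records; field names are illustrative only.
pins = [
    {"region": "Northern Nigeria", "lat": 12.0, "lon": 8.5,
     "vaccine": "polio", "reason": "bio-weapon rumour", "year": 2003},
    {"region": "Japan", "lat": 35.7, "lon": 139.7,
     "vaccine": "HPV", "reason": "anaphylaxis rumour", "year": 2013},
]

def find_pins(pins, vaccine=None, year=None):
    """Return pins matching the given vaccine type and/or year,
    mirroring the filters described above."""
    return [p for p in pins
            if (vaccine is None or p["vaccine"] == vaccine)
            and (year is None or p["year"] == year)]

print([p["region"] for p in find_pins(pins, vaccine="HPV")])  # ['Japan']
```

Each match would then render as a pin on the map, with its paragraph and source links attached.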

Figure 2. Health Map of outbreak alerts.

Figure 3. Vaccine Confidence Project

Here comes the rambling. The system architecture of the “Atlas” should be pretty straightforward: a web frontend, a web backend, a data repository, a classification engine, and a crowd-sourced data acquisition API. But here comes the problem – most of the maps I am talking about have million-dollar grants and teams of six people. I am one person, with no research grants (the Sherman Centre does provide grants to residents for travelling and attending conferences, but that’s beside the point). Because I’m neither a programmer nor a GIS specialist, I’ve spent the past two weeks playing with map-making software that doesn’t require programming, to get a sense of which option best suits my needs. We’ll go through the process of how this map is going to be built, for anyone who might be interested.
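Of the pieces listed above, the crowd-sourced acquisition side is the one that most needs guardrails, so here is a hedged sketch of how a public submission might be validated before it is queued for manual verification; the required fields are my own guesses, not a designed schema.

```python
# Hedged sketch: validating a crowd-sourced Atlas submission.
# The required fields below are assumptions, not a finalized schema.
from datetime import date

REQUIRED = {"region", "vaccine", "year", "source_url"}

def validate_submission(entry: dict) -> list:
    """Return a list of problems; an empty list means the entry can be
    queued for manual verification (as Healthmap does with its
    passive-surveillance reports)."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - entry.keys())]
    year = entry.get("year")
    if isinstance(year, int) and not (1796 <= year <= date.today().year):
        problems.append("year out of range")  # 1796: Jenner's smallpox vaccine
    return problems

ok = {"region": "Japan", "vaccine": "HPV",
      "year": 2013, "source_url": "https://example.org/article"}
print(validate_submission(ok))  # []
```

Everything after this gate – classification, mapping, display – can then assume well-formed entries.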

One of the easiest ways to build a crowd-sourced map is with Story Maps, developed by Esri for ArcGIS. To use Story Maps, you will need an online Esri account. An example of a Story Map is “Faces of Opioid” (see Figure 4): anyone from the public can contribute by clicking “Add Lost Loved One” and uploading a picture to the map. This is the first idea. There is no better way to create maps than ArcGIS: it is powerful and intuitive, has a rich online resource database for case-based learning, and reads and exports all sorts of file formats (XLS, JSON, CSV) and cloud sources (Dropbox, OneDrive, Google Drive). But it’s not free ($100 CAD/yr for students), and I love free. So I looked around for other map-making tools.

Figure 4. Faces of Opioid

The second way to make and populate maps is with Tableau. This one isn’t free either, but it is free for students. There is no better way to make beautiful graphics and charts than Tableau; I have heard Anatoliy Gruzd, director of Ryerson University’s Social Media Lab, call it “Excel on steroids” – and it is pretty much that. There are functions to present your map chronologically, but it’s not built for crowdsourcing. The third way to quickly build a map is Infogram, which allows you to populate the map yourself – it’s pretty popular with infographic makers, and the interface is easy to use.

There’s also the possibility of building the map with Google My Maps. After tinkering with it a bit, I found it the most customizable for crowd-sourcing and for input from my end into the repository. But I have not completely given up on Story Maps yet.

Before I sign off, I just wanted to share a site called Disease Map, which lets you pin your chronic or hereditary disease on the map and connect with others who have the same disease online. Figure 5 shows people with anemia who have pinned themselves on the map.

Figure 5. Disease Map

I’ll be returning to show you the constructed Atlas in a few months’ time.
