Introducing myself and Raycasting 101.

Hi, my name is Matthew Davis, and I am the new Postdoctoral Fellow here at the Sherman Centre. I’m generally not good at biographies or introductions, but I figured I’d take this opportunity to talk a little bit about myself, about where I fit into digital scholarship (I think) as a humanist, and then, more importantly, to offer a few practical tips that come out of some recent breakthroughs I’ve had with my slowly developing archive of transcribed poems by John Lydgate, Minor Works of John Lydgate. Why minor? Because I’m the editor, the programming staff, the designer, and basically everything. It’s a one-man shop, and because of that I have a lot of experience with the sorts of problems that graduate students and early-career scholars trying to do this work with a laptop and Stack Overflow might run into.

I may also seem like a bit of an anomaly sitting alongside the 3D printers and clade diagrams. My PhD is in English literature, with a focus on late fifteenth- and early sixteenth-century drama, hagiographic works (especially those about Mary Magdalene), and material textuality. Note that nowhere in there did I express a formal academic interest in digital anything. That doesn’t mean the interest isn’t there, but to me the possibilities the digital humanities opens up (I am consciously using the lower case here because I think capitalizing the term turns it into a discipline, which is another way of saying “this is for me, but not for you”) lie in how it helps us better understand questions that have always been with us: how ideas are transferred over time, or (to give an example I created to help me with an article I’m hoping to get to writing soonish) how you might visually lay out who is speaking where to address the question of staging in an incredibly unwieldy play. In essence, I want to avoid the suggestion that we should set aside all of those old concerns to chase the new hotness, since now computers can do it all for us. Because if you’ve ever programmed a computer to do anything, you quickly learn that computers are dumb. They’re great at finding patterns, but not so great at explaining why those patterns matter. And that’s the sort of thing I care about, when you get right down to it.

I’m also interested in how we can do the work of displaying real things online better, and that’s where the Minor Works of John Lydgate site and the practical tips come in. That site all started with a version of Lydgate’s Testament and Quis Dabit Meo Capiti Fontem Lacrimarum (“Who will give my head a fountain of tears”, also known as The Lamentation of Our Lady Maria) in a parish church in a village in East Anglia called Long Melford. The thing about these poems is that they’re not in a book. They’re painted on the walls of a chantry chapel. More than that, they’re actually different in some ways from the versions that were written down in books in the fifteenth century — at least the ones we still have. But none of the standard editions of these poems acknowledges this. I have another article coming out that talks about the poems in the chapel and speculates somewhat about why they were altered and for what purpose, but even including the poems in that article doesn’t give people the full story. Because the chapel is a three-dimensional thing.

Think about it. If you walk into a room that has writing on its walls, you don’t experience that writing in the same way you might if you read it in a book. The physical space does something to your experience of interacting with the text. So the question becomes how to give people a sense of that while acknowledging that it’s only ever a sense of the actual space in Long Melford. The answer is to put the poems up online. But it’s not enough to simply put up pictures and text. That’s like giving someone a deck of cards when what you really want to do is glue the cards together into a cube and punch a peephole in the side. Yet that card-deck approach is how most sites, including Minor Works of John Lydgate, approach medieval texts.

[Screenshot: an individual panel page on the Minor Works of John Lydgate site, with its “view model” link.]

What I wanted to do was give people the ability to switch between the card deck, which does have its benefits, and a model that shows how the various panels (or cards, if we keep up the analogy) fit into the overall whole. So using a tool called Agisoft Photoscan I built a three-dimensional model of the chapel from hundreds of pictures I took while there on research (if there’s interest, I’ll write a post describing that process later in the semester). The model was complete enough that people could see how the various panels related to each other, and I could go to the model from the individual panels via that “view model” link above. I still needed a way for someone to be able to click on the panels in the model, though, and go back to the panel they came from, or to any other point of interest in the room.

I use a free JavaScript library called three.js to display the model Photoscan creates. That display still has some problems, especially in how you navigate the model and in making it clear that the model is still loading, so I don’t want to do a full description of how the model code works yet. What it does have, and the tip mentioned earlier, is a class called Raycaster. In the JavaScript code that displays the model it gets implemented, in part, like so:

var raycaster = new THREE.Raycaster();
var mouseVector = new THREE.Vector2();

function onDocumentMouseDown( e ) {
    e.preventDefault();

    // convert the click's pixel position into the normalized coordinates
    // three.js expects, running from -1 to 1 on both axes
    mouseVector.x = ( e.clientX / window.innerWidth ) * 2 - 1;
    mouseVector.y = - ( e.clientY / window.innerHeight ) * 2 + 1;

    // cast a ray from the camera through the clicked point and collect
    // everything in the scene that the ray passes through
    raycaster.setFromCamera( mouseVector, camera );
    var intersects = raycaster.intersectObjects( scene.children, true );
}
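
For that handler to fire at all, it also has to be registered somewhere. In a typical three.js setup that means a single listener on the document (or on the renderer’s canvas), which isn’t shown in the excerpt above:

// wire the handler above to mouse clicks anywhere in the document
document.addEventListener( 'mousedown', onDocumentMouseDown, false );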

So what that does is create a new Raycaster object, which is in essence a line extending out from the imaginary camera toward whatever point someone clicks on in the model. The onDocumentMouseDown function is what makes it dependent on a mouse click. The mouseVector variable works out where the x and y coordinates of the mouse are, translated into the normalized form three.js expects, and the setFromCamera function is what actually lays out the ray from the camera through the coordinates determined by mouseVector. The last item, the intersects variable, holds a list of every part of the model that lies along that ray, sorted from nearest the camera to farthest away.

At this point you have a camera, a ray extending out from that camera, and the coordinates of a point in the way. But none of this really tells you anything on its own. If you were to use JavaScript’s console.log function to display the contents of the first entry in the intersects variable (intersects[0]), it would look something like this:

distance: 8.614119579522454
face: Face3 {a: 633771, b: 633772, c: 633773, normal: Vector3, vertexNormals: [], …}
faceIndex: 633771
index: 633771
object: Mesh {id: 10, uuid: "737A526B-08CB-420C-B7F1-4E5C36CAE6B8", name: "", type: "Mesh", parent: Group, …}
point: Vector3 {x: 7.097271187313979, y: -0.7812168570233813, z: -1.8331882520774139, isVector3: true, …}
uv: Vector2 {x: 0.40269057008505815, y: 0.07953523483060919, isVector2: true, …}

So each entry in intersects is a JavaScript object consisting of a number of properties conveniently labeled so we can use them. The particular one we’re interested in in this case is point. The point property has x, y, and z coordinates, which means it tells us exactly where the click landed on the model in three-dimensional space.
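
As a practical aside, one easy way to harvest those coordinates while poking around the model is simply to log the nearest intersection at the end of onDocumentMouseDown and copy the numbers out of the browser console. A minimal sketch:

// guard against clicks that miss the model entirely, then log the
// 3D coordinates of the nearest point the ray hit
if ( intersects.length > 0 ) {
    console.log( intersects[0].point.x, intersects[0].point.y, intersects[0].point.z );
}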

Once we have that, we can click on the four corners of a panel. This gives us the corners of a pyramid that originates with the camera. Those coordinates, along with the URL of the panel they represent, can be stored as an array in another variable, with the x, y, and z entries each holding the two values that bound the panel on that axis (the order doesn’t matter, since the function below sorts out which is the minimum and which is the maximum):

var panel =
[
    {
        'url': 'http://www.minorworksoflydgate.net/Testament/Clopton/sw_test_1.html',
        'x': [-9.38, -7.47],
        'y': [6.80, 7.49],
        'z': [-8.18, -8.98]
    }
];

Once you have all of this, it’s just a matter of creating a function that sends you to the correct panel, with its information, should you click inside one of those regions:

function clickURL( intersects, x1, x2, y1, y2, z1, z2, URL ) {
    // nothing to do if the click didn't hit the model at all
    if ( intersects.length === 0 ) { return; }

    if ( intersects[0].point.x > Math.min( x1, x2 ) && intersects[0].point.x < Math.max( x1, x2 ) ) {
        console.log( "if statement one condition met." );
        if ( intersects[0].point.y > Math.min( y1, y2 ) && intersects[0].point.y < Math.max( y1, y2 ) ) {
            console.log( "if statement two condition met." );
            if ( intersects[0].point.z > Math.min( z1, z2 ) && intersects[0].point.z < Math.max( z1, z2 ) ) {
                console.log( "if statement three condition met." );
                // the click fell inside the panel's bounding box, so load its page
                window.parent.location.href = URL;
            }
        }
    }
}
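
To tie the pieces together, each stored panel just needs to be run through clickURL whenever a click produces an intersection. A rough sketch of that glue code, assuming it sits at the end of onDocumentMouseDown and uses the panel array above:

// check the click against every stored panel; clickURL handles the
// redirect if the intersection point falls inside one of them
for ( var i = 0; i < panel.length; i++ ) {
    var p = panel[i];
    clickURL( intersects, p.x[0], p.x[1], p.y[0], p.y[1], p.z[0], p.z[1], p.url );
}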

And that’s all there is to it.

I know it seems like either not very much or a whole lot, depending on how familiar you are with programming and the three.js library. But it’s something I’d been trying to figure out, off and on, for a few months, since the raycasting functionality was much more difficult to work with in three.js until recently. I’m really happy to have it out of the way. That it took so long also speaks to a problem with digital scholarship — so much of the work done for it is like the lower portion of an iceberg: invisible and largely ignored until it tears the bottom out of your boat. Next time I write a post I’ll talk a little about the implications of that invisibility.
