An animated look at the history of zero.
We have a Reconstruction problem. Not because Reconstruction failed, but because it succeeded in the very sense that many white Americans wanted it to succeed.
Source: higher education
Structuring a database is not an easy task. During this year of work, we have faced many challenges that have demanded great intellectual effort and reflection. Nevertheless, I have heard from “digital humanists” and programmers that because we have a software developer, we are not making the database ourselves, that someone is doing it for us. The underlying argument is that we need knowledge of basic programming principles such as HTML and CSS to claim authorship in the making process. Having such programming skills today is helpful. However, the fact that our participation in programming is limited does not mean we are not the main creators of the database. This blog post describes some of the main challenges that make us, the historians, crucial for this type of project, and it is, in part, an answer to technocratic points of view on the relationship between historians and software developers.
First, the concept of the project (databasing baptismal records) is ours. This project is not something anyone could have imagined without the proper historical training. You need to know about the sources, their internal logic, the institutions that produced them, paleography, and other language skills. It is important to decide which fields can be extracted from the sources without violating the integrity of the documents. We have to respect historical concepts and to know that their meanings changed over time. We decided how to organize the fields in a coherent and hierarchical way. We need to translate our needs to programmers without historical training. We, the historians, are the most important actors. Thus, HTML and CSS play a minor role in conceiving the idea. The development part is crucial, but it should not be confused with this first step. The same holds wherever social scientists rely on programmers to materialize their projects.
We had important advantages when we started this project. First, digitized copies of the original documents are available online. The project “Ecclesiastical & Secular Sources for Slave Societies” (ESSSS) has digitized and posted online the parish records from Colombia, Brazil, Cuba, and Florida. Without this amazing repository, our database would have been impossible. These baptismal records are geographically, linguistically, and temporally diverse but, due to the centralized nature of the Catholic Church, they are also homogeneous sources, regardless of language, period, and region. This makes them the perfect candidate for a transnational standardized database. It also makes it feasible to move the data from the digitized documents into an accessible, searchable, malleable, and “cleaner” digital format. It sounds easier than it is, though.
Defining the categories or fields that will appear in the search tool is definitely challenging. Even when the documents are homogeneous, new information often shows up, and we need to decide whether it deserves an individual field. Databases must have a limited universe of regular fields to remain functional. We restricted our variables to those that regularly appear in the documents; those that do not show up frequently are recorded in a “Miscellaneous” field. Deciding on the fields is not the only challenge; naming them is another difficult step. Take the example of race and ethnicity. The categories, language, and meanings of race differ over time and by region. For instance, there are sometimes equatable categories of race in the Portuguese- and Spanish-speaking worlds, while Anglophone regions have had different definitions of race; in both cases, racial categories are subject to change over time. Because we do not want to violate the documents, we kept race as it appears in the sources, including the original language. Something similar happens with African ethnic designations in the Americas. Across different regions, African origins are defined in the documents as nations. We keep the term “nation” as it appears in the document, although sometimes these categories do not represent an ethnic identity that carried meaning in an African context. These decisions resulted from long discussions and from reading the most important historiography on the topic. There is always great space for disagreement. The next post will discuss how we structured the fields in a relational diagram.
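To make the design decisions above concrete, here is a minimal sketch of what such a schema might look like in SQLite. Every table name, field name, and sample value below is a hypothetical illustration, not the project's actual design; the point is only to show a verbatim-category field, a "nation" field kept as written, and a catch-all "Miscellaneous" column.

```python
import sqlite3

# Hypothetical schema sketch; field names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE baptism (
    id               INTEGER PRIMARY KEY,
    record_date      TEXT,  -- date as written in the register
    parish           TEXT,
    baptized_name    TEXT,
    race_as_recorded TEXT,  -- kept verbatim, in the original language
    nation           TEXT,  -- African "nation" as it appears in the source
    miscellaneous    TEXT   -- infrequent information with no field of its own
);
""")

# A made-up example record, not drawn from any real register.
conn.execute(
    "INSERT INTO baptism (record_date, parish, baptized_name, race_as_recorded, nation) "
    "VALUES (?, ?, ?, ?, ?)",
    ("1817-03-02", "Havana", "María", "parda", "carabalí"),
)

# Searching on the verbatim category rather than a modern translation of it.
rows = conn.execute(
    "SELECT baptized_name FROM baptism WHERE race_as_recorded = ?", ("parda",)
).fetchall()
print(rows)  # [('María',)]
```

Keeping the source's own wording in a dedicated column, rather than normalizing it at entry time, defers interpretive decisions to the query stage, which is where the historiographical debates the authors describe actually belong.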
If you watch sports regularly, you’re probably familiar with the concept of “East Coast Bias.” Teams from places like New York, Boston, and Washington, DC, can seem to dominate the coverage among sports media outlets, while West Coast teams, because their games are on so damn late for east-coasters, play second fiddle. The phenomenon was more […]
As part of the Historical Teaching and Practice program, I [Kalani Craig] presented three easily adaptable digital-history lesson plans that work nicely in single 75-minute sessions. These handouts provide the basic structure of the lesson plans without the images produced by students in previous iterations of those activities. Access resource here.
From their earliest incarnations in the seventeenth century, through their Georgian expansion into provincial and colonial markets, and culminating in their late-Victorian transformation into New Journalism, British newspapers have relied upon scissors-and-paste journalism to meet consumer demands for the latest political intelligence and diverting content. However, mass digitisation of these periodicals, in both photographic and machine-readable form,…
What I want to do here is present something that scholars or digital history students could use to think about how one might make a map like this. For people interested in doing digital history, it may be useful to see the process and to get a sense of the kind of coding that is necessary to get usable data from a set of websites on the web.
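The post does not reproduce its code here, but the kind of step it describes, pulling usable data out of downloaded web pages, can be sketched with Python's standard library alone. The HTML snippet below is a made-up stand-in for a real page; in practice you would first fetch pages with `urllib.request` and feed each one to the parser.

```python
from html.parser import HTMLParser

# Minimal scraping sketch: collect the text of every table cell on a page.
class CellExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        # Keep only non-empty text that appears inside a <td> element.
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

# Hypothetical page fragment: a place name with coordinates for mapping.
html = "<table><tr><td>Boston</td><td>42.36</td><td>-71.06</td></tr></table>"
parser = CellExtractor()
parser.feed(html)
print(parser.cells)  # ['Boston', '42.36', '-71.06']
```

Once the scattered values are in a uniform list like this, they can be written to CSV and handed to whatever mapping tool the project uses.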
Carr’s distinction between plain old “facts” and “historical facts” is a useful means by which to highlight the essentially, indeed radically, subjective nature of History.
Thanks to Ryan Baumann’s work creating a concordance between geographic identifiers in the Pleiades Gazetteer of Ancient Places and the Getty Thesaurus of Geographic Names, Dan Pett of the British Museum was able to incorporate these concordances into the Portable Antiquities Scheme database. Dan’s Nomisma-Pleiades-TGN concordance R script is on Github.
Dan then emailed the Nomisma listserv a large CSV document of all mints in the PAS database, with associated Nomisma, Getty, BM, Geonames, dbPedia, Pleiades, and other IDs. I stripped away all of the mints that don’t already have Nomisma IDs so that I could upload the CSV into Google Sheets, which makes it possible to import data from the Atom representation of the spreadsheet into the Nomisma RDF. I expanded all of the concordance ID columns into full URIs for the Nomisma spreadsheet validation process, and then successfully updated 721 Greco-Roman mints to add Getty, BM, Geonames, and dbPedia URIs as skos:closeMatch objects. Further, the spreadsheet import process parsed the dbPedia URIs to perform a Wikidata lookup, enabling us to add further concordances extracted from Wikidata, including the Wikidata URI itself, plus GND, BnF, and Freebase identifiers. The Wikidata lookup also adds additional translations as skos:prefLabel values drawn from article titles in other languages.
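The two spreadsheet-cleaning steps described above, dropping rows without a Nomisma ID and expanding bare concordance IDs into full URIs, can be sketched in a few lines of Python. The column names, ID values, and URI prefixes below are illustrative assumptions, not the actual layout of Dan's CSV.

```python
import csv
import io

# Hypothetical URI prefixes for expanding bare concordance IDs.
PREFIXES = {
    "pleiades": "https://pleiades.stoa.org/places/",
    "geonames": "http://www.geonames.org/",
}

# Made-up stand-in for the mints CSV; the second row lacks a Nomisma ID.
raw = io.StringIO(
    "mint,nomisma,pleiades,geonames\n"
    "Zeugma,zeugma,658495,\n"
    "Unknown Mint,,123456,\n"
)

cleaned = []
for row in csv.DictReader(raw):
    if not row["nomisma"]:  # strip mints without a Nomisma ID
        continue
    for col, prefix in PREFIXES.items():
        if row[col]:        # expand bare IDs into full URIs
            row[col] = prefix + row[col]
    cleaned.append(row)

print(len(cleaned))             # 1
print(cleaned[0]["pleiades"])   # https://pleiades.stoa.org/places/658495
```

Expanding IDs to full URIs before upload matters because the validation step and the skos:closeMatch statements both operate on resolvable URIs rather than bare identifiers.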
As a result, we have added more than a dozen new translations for Zeugma and a few additional URIs.
Posted by Ethan Gruber at 11:30 AM
Friday, October 23, 2015