The 2015 Horizon Report identifies the proliferation of Open Educational Resources (OER) as one of six trends that will accelerate technology adoption in higher education. With OER gaining traction across campuses, the report predicts increased acceptance and usage over the next two to three years. However, the broader proliferation of OER hinges on effective leadership: “While data shows that some faculty are integrating OER on their own, institutional leadership can reinforce the use of open content”. As Tony Bates observed: “There is a lot of evidence to suggest that the take-up of OERs by instructors is still minimal, other than by those who created the original version”.
How can institutional leadership foster the use of OER? Which strategies do stewards of open education deploy to disseminate best practices and high-quality material? It was my pleasure to talk to Francesca Allegri and Bradley Hemminger, who are currently implementing an OER initiative at the University of North Carolina at Chapel Hill.
Fran: One thing we identified from our survey was that successful programs included the library and the faculty development center as critical partners. Our committee felt that, for a number of reasons, the best approach on our campus was a slow-growth one, where we could build support from campus units and faculty, have guidelines available (implemented here as a library resource guide http://guides.lib.unc.edu/OER), be sure the infrastructure was in place (for instance, having an OER collection in the Carolina Digital Repository with an easy submission mechanism), and develop metrics for measuring success before we begin to promote OERs on campus.
We will begin to officially promote OER support on campus later this year (Fall 2015), including an award program that will annually help a small number of instructors re-examine their courses to incorporate more OERs, or to develop publicly sharable OER content for their courses. The award program will provide stipends to help offset the costs involved with re-envisioning courses and developing open course content materials. The UNC Press is connected to this effort by looking at ways to support authors of larger content pieces (like full textbooks).
Fran: Librarians are implanted with a sharing chip! All of the instructional materials we create here at the Library are freely available. When we receive requests to use or adapt content we have developed, we only ask for attribution. Unless there is some requirement from an external collaborator to do otherwise, that is how we approach our teaching materials. For me personally, I love to find OER content that I or my colleagues can use or adapt. Much better than recreating the wheel.
One role librarians will play in the UNC-CH OER initiative will be helping faculty find relevant, high-quality OERs they can consider using in their teaching. This is a key way that the subject specialist librarians across the libraries can help faculty adopt this content. If librarians identify a lack of suitable content in a faculty member’s area of teaching, that may also inspire faculty to create or share the curriculum materials they develop. The librarians can also support faculty sharing efforts, for example by alerting them to the Carolina Digital Repository and its submission process, assisting with Creative Commons licensing, and providing similar help that can preserve faculty’s desired author’s rights and make their contributions discoverable by their peers and students. Contacting a librarian early in the process can also save the faculty member time.
Fran: We also identified enabling factors. These include
If you want to learn more about our initiative, consult the UNC-CH campus page on OERs.

Brad Hemminger is an associate professor at the School of Information and Library Science (SILS) at the University of North Carolina. He has a joint appointment in the Carolina Center for Genome Sciences. His research interests include digital scholarship, information seeking, information visualization, user interface design, digital libraries, and biomedical health informatics. He has published over 85 papers, served on several international standards committees, and consulted for a number of companies in the areas of visualization and user interfaces. He serves as a reviewer for over fifteen journals and conferences. He currently teaches scholarly communications, databases, biomedical health informatics, information visualization, and data science. He is director of the Informatics and Visualization Lab at UNC, part of the Interactive Information Systems Lab, and directs the Center for Research and Development of Digital Libraries. His current research focuses on developing new paradigms for scholarship, publishing, and information seeking and use by academics in the digital age. For more information see his website http://ils.unc.edu/bmh/.
Francesca Allegri, MSLS, is Assistant Director (Interim) of the Health Sciences Library, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina. As Assistant Director, she is determining and implementing user focused strategic initiatives, allocating resources, and advising the Director in these areas. She also is Head of User Services, Health Sciences Library. She manages a strong liaison librarian program and single service point (20 FTEs) and is part of the library’s senior management team. She is also a graduate of the National Library of Medicine/Association of Academic Health Sciences Libraries Leadership Fellows Program. Prior to that, she held two positions in the Health Sciences Library’s administrative unit managing professional librarian recruitment, staff development, planning, and institutional data collection and reporting. She also served four years as Department Head of the education department at the Health Sciences Library and has had leadership experience in campus organizations, such as the University Managers Association and the UNC Network for Clinical Research Professionals. Earlier, Ms. Allegri served as Assistant Head at the University of Illinois Library of the Health Sciences in Urbana, Illinois. She holds an MSLS from the University of Illinois at Urbana-Champaign, Urbana, Illinois.
In my last post on text mining, I described how to collect data from Twitter. In this post, I will describe how we can summarize a large set of tweets on a given topic, for example the latest SITE conference.
Background: Giving structure to your data
Text data, such as tweets, comments, or posts, usually comes with little structure, compared to, say, scores on Likert scales. To visualize and quantify the data, we first have to give it structure. Suppose we have a character vector like the following:
 "I am a member of the XYZ association"
 "Please apply for our open position"
 "The XYZ memorial lecture takes place on wednesday"
 "Vote for the most popular lecturer!"
What is a character vector? You can think of a character vector as a container for all the text pieces. Each piece represents the text from one individual and is assigned a number, so you can access any piece by its number. This type of data is easy for humans to read, but not for machines. Machines prefer the same information structured in the following way:
A matrix structured in this way is called a term-document matrix: each row represents a word (term), and each column represents a document, that is, all the texts from one individual. (Its transpose, with one row per document, is called a document-term matrix; both carry the same information.) Each element in the matrix is the number of times a particular word appears in a particular document. You may have noticed that all texts have been converted to lowercase in this matrix, and that some words, like “a” or “the”, do not appear at all: such common stop words are usually removed before the matrix is built.
To convert the tweet texts you collect into such a matrix, a few preprocessing steps are usually necessary.
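These steps typically include converting everything to lowercase, removing punctuation and numbers, dropping stop words, and then counting how often each remaining term occurs in each document. A minimal sketch with the tm package in R, continuing from the docs vector above (this illustrates the general recipe, not the exact code behind the conference tweets):

    library(tm)

    # Wrap the character vector in a corpus object
    corpus <- VCorpus(VectorSource(docs))

    # Typical preprocessing steps
    corpus <- tm_map(corpus, content_transformer(tolower))       # lowercase
    corpus <- tm_map(corpus, removePunctuation)                  # strip punctuation
    corpus <- tm_map(corpus, removeNumbers)                      # strip numbers
    corpus <- tm_map(corpus, removeWords, stopwords("english"))  # drop stop words like "a", "the"
    corpus <- tm_map(corpus, stripWhitespace)                    # collapse extra spaces

    # One row per term, one column per document
    tdm <- TermDocumentMatrix(corpus)
    inspect(tdm)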
Sample Data - Tweets on #siteconf
Did you miss your favorite AACE conference? Would you like to find out which topics dominated the discussion? We collected 709 tweets using the hashtag "#siteconf".
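Collecting the tweets themselves is covered in the previous post; as a reminder, a minimal sketch with the twitteR package might look like the following (the credentials are placeholders, and the exact search parameters we used may differ). The same preprocessing recipe shown above can then turn these texts into a term-document matrix.

    library(twitteR)

    # Authenticate with placeholder credentials (see the previous post)
    setup_twitter_oauth("CONSUMER_KEY", "CONSUMER_SECRET",
                        "ACCESS_TOKEN", "ACCESS_SECRET")

    # Search recent tweets for the conference hashtag and keep the text
    site_tweets <- searchTwitter("#siteconf", n = 1000)
    tweet_df    <- twListToDF(site_tweets)
    docs        <- tweet_df$text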
Step 1: Word Clouds
To take a quick look at our data, an initial visual representation with word clouds is helpful.
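A word cloud can be drawn from the term frequencies in a term-document matrix such as the tdm object sketched above; one way to do it, assuming the wordcloud package:

    library(wordcloud)
    library(RColorBrewer)

    # Total count of each term across all tweets
    term_freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

    # Plot terms that occur at least five times; more frequent terms appear larger
    wordcloud(names(term_freq), term_freq, min.freq = 5,
              random.order = FALSE, colors = brewer.pal(8, "Dark2"))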
Step 2: Cluster Tree
A more structured way to explore the data in an associational sense is to look at collections of terms that frequently co-occur. This method is called cluster analysis.
Cluster analysis is a way of finding associations between items and binding nearby items into groups. A typical visualization technique is a tree diagram called a dendrogram. The most common types of cluster analysis are k-means clustering and hierarchical clustering. K-means clustering requires you to specify up front how many groups you want in the result, while hierarchical clustering does not.
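One way to produce such a dendrogram in R, again starting from a term-document matrix like tdm above (the sparsity threshold and clustering method are illustrative choices, not necessarily the exact ones used for our figure):

    # Drop terms that are absent from at least 95% of the tweets,
    # otherwise the tree becomes unreadably dense
    tdm_dense <- removeSparseTerms(tdm, sparse = 0.95)

    # Hierarchical clustering of terms based on their co-occurrence profiles
    dist_terms <- dist(scale(as.matrix(tdm_dense)))
    fit <- hclust(dist_terms, method = "ward.D2")
    plot(fit)   # draws the dendrogram

    # The k-means alternative needs the number of groups up front, e.g.:
    # kmeans(scale(as.matrix(tdm_dense)), centers = 5)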
The density and shape of the dendrogram vary with the sparsity threshold; the dendrogram above was produced with a sparsity of 0.95. It is interesting that when people tweeted using the hashtag “#msueped”, they also tended to use “#site2015”. “#msueped” refers to the Educational Psychology and Educational Technology program at Michigan State University, so you can tell that many people from this program attended the SITE 2015 conference.
Did you gain a sense of what the SITE community is talking about? Data visualization certainly helps make sense of large datasets, as it gives you an overview from an elevated perspective. However, don’t mistake a set of images for the real thing. If you attended SITE 2015 in Las Vegas, your firsthand experience is likely to be quite different and certainly more in-depth. Also keep in mind that while social media is becoming ever more popular, Twitter users are still only a subgroup of the whole audience.
No approach is neutral in its analysis: understanding the tools we use helps us interpret seemingly obvious connections more carefully. If you want to explore how we produced these visualizations, use our sample data set with the accompanying instructions.