In December we held our second CC Technology Summit at MIT in Cambridge, MA. I think the day provided a great perspective on what we’re doing at CC and how others are building a real community around it. If you weren’t able to attend, we now have audio and video available. And if you missed the first one, the video for that is available as well.
We’re currently thinking about plans for the next event; if you have feedback or suggestions, email them to email@example.com.
Nathan Yergler proceeded to wrap up the tech conference with some humble predictions about where CC tech will be headed.
The following is a brief list of these future initiatives:
- Using RDFa to publish metadata in a distributed fashion
- The Next Generation of MozCC
- Making attribution easier
- Universal Education Search
- CC0 & Public Domain Assertion
- OSCRI / CC Network and creating an interoperable registry with Safe Creative and Registered Commons
Rich Pearson began with an introduction to FairShare, which is a soon-to-be-launched free tool that allows creators to claim their works and discover how their works are shared and remixed. FairShare is open and supports multiple licensing standards. He then stepped through a demo of FairShare.
The second part of his talk was a brief overview of some CC licensing statistics from the web. They found at least 56 million license uses (excluding a deep search of images and Flickr). Another interesting point was the long tail of jurisdiction-specific licenses: nearly 75% of licenses were generic (unported), with no other jurisdiction accounting for more than a few percent.
Jonathan Rees of Science Commons discussed the open source knowledge management system that Science Commons is developing. He discussed the importance of interfacing different stores of data and knowledge, and elucidated how Science Commons is making progress on these issues. In the process Jonathan outlined six layers that comprise an interface: permission, access, container, syntax, vocabulary, and semantics.
The focus of this project is data integration; its importance lies in reducing the huge transaction costs of using different data stores that have been assembled for different purposes. Data integration does happen, but at a huge expense of effort; it is hard, complex, and fragile; “glue” is necessary at all levels, and the process is manual and error-prone.
By developing and testing the whole interface stack for scientific data, Science Commons aims to make the data integration problem vastly more tractable, so that it becomes easier to understand, browse, search, consult, transform, analyze, visualize, model, annotate, and organize data.
Jonathan closed with a call to action: “choose, promote, and nourish sharing solutions at every level in the stack.”
Ben Adida is back again from the first tech conference with a new talk about RDFa.
First he gave a brief review of RDFa: there exists a huge chasm between the human web and the data web. RDFa addresses our need to bridge this gap. We want machine-readable metadata so we can use computer programs to answer simple questions about a work to save on time and effort. He then moved on to explaining ccREL, the Creative Commons Rights Expression Language. There are four principles for publishing in HTML: 1) visual correspondence, 2) don’t repeat yourself, 3) remix friendliness, 4) extensibility and modularity.
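To make those principles concrete, here is a minimal sketch of what ccREL markup in RDFa can look like. The URLs and names (example.com, “Alice”) are illustrative, but the `cc:` and `dc:` vocabularies and the `rel="license"` pattern follow the ccREL examples CC publishes:

```html
<!-- Illustrative snippet: a CC BY 3.0 license claim with attribution
     metadata, expressed in RDFa per ccREL. Note that the metadata lives
     in the same markup the human reader sees (visual correspondence). -->
<div xmlns:cc="http://creativecommons.org/ns#"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     about="http://example.com/photo.jpg">
  <span property="dc:title">Sunset</span> by
  <a rel="cc:attributionURL" property="cc:attributionName"
     href="http://example.com/alice">Alice</a>
  is licensed under a
  <a rel="license"
     href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0 license</a>.
</div>
```

Because the license link doubles as machine-readable metadata, nothing is repeated and the visible page stays in sync with the data web.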
For the main portion of his talk, Ben went over the events of the past six months regarding RDFa adoption.
- April-May: Digg deployed RDFa.
- June: RDFa goes W3C “Candidate Recommendation” with around 12 implementations (parsers).
- June: Open Archives Initiatives supports RDFa; UK National Archive uses RDFa.
- September: Yahoo SearchMonkey deploys RDFa support.
- October: RDFa goes W3C Recommendation.
- November: CC launches the CC Network; Drupal announces a roadmap for RDFa integration.
He concluded with a demonstration of sample SearchMonkey functionality that grabs CC license metadata from search results and displays that information on the search page.
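The simplest piece of that functionality, pulling a `rel="license"` link out of a page, can be sketched with nothing but the Python standard library. This is not SearchMonkey's actual implementation, just a minimal illustration of the kind of extraction it performs:

```python
from html.parser import HTMLParser

class LicenseExtractor(HTMLParser):
    """Collect the href targets of <a rel="license"> links -- the most
    basic ccREL metadata a search tool can read from a page."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel can hold several space-separated values; check each one.
        if tag == "a" and "license" in attrs.get("rel", "").split():
            self.licenses.append(attrs.get("href"))

# Sample page content (illustrative).
page = """<p>This work is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
Creative Commons Attribution 3.0</a> license.</p>"""

parser = LicenseExtractor()
parser.feed(page)
print(parser.licenses)  # → ['http://creativecommons.org/licenses/by/3.0/']
```

A real consumer would use a full RDFa parser to capture attribution names and URLs as well, but even this much is enough to annotate a search result with its license.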
“What’s next?” asks Ben, with a strong disclaimer that this is just a small glimpse of what is possible. He points to HTML 4 and 5 integration, the simplification of common cases (not having to keep redefining common namespaces), finding common ground with the microformat community, and better search and in-browser tools.
The question Ben asks you to take away is this: what are you waiting for to consume and/or publish RDFa?
David Torpie of the Office of Economic and Statistical Research at the Queensland Treasury gave a talk on “Government Information Licensing Framework: a multidisciplinary project improving access to Public Sector Information.” This is a project to give greater access to Australian government data, to make government more transparent, and in doing so to develop a standard set of terms and conditions that are broadly applicable to other government contexts.
David first answered the question of why the Australian government even needs to worry about licensing its works. In Australia, unlike the United States, the government has copyright over works produced by government agencies. Australian copyright law also extends to less-than-creative works (such as telephone directories), which makes it all the more important that public licensing be clear and simple.
The solution developed at the Queensland Treasury is “digital license management”, or DLM. DLM is a technology, developed in Java, to embed license metadata into documents and other works. Benefits include ease of linking from data to license, and finding information based on its license. DLM was developed before a suitable alternative was available; liblicense now provides similar functionality in C. The team developing DLM is working with CC’s tech team on collaboration, and initial indications are that a dedicated Java tool may prove very useful.
Oshani Seneviratne, a student at MIT, presented her work on “Detecting Creative Commons Attribution License Violations with Flickr Images on the World Wide Web,” which she completed as a summer project. She summarized her motivation, the use of CC licenses in Flickr, system design, and future directions for the project.
CC provides free copyright licenses but does not provide a means of detecting violations of the terms of these licenses. For instance, someone could use a CC-BY photo on their home page without providing attribution. Oshani’s project demonstrates a means of searching Flickr to detect violations like these. The implementation uses the Flickr API to find images and license data, and detects whether or not attribution is given on the web page where the image is re-used. One limitation is that the validator needs to know the image URI in order to search Flickr.
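The core check can be sketched roughly as follows. This is a hypothetical simplification, not Oshani's implementation: the function name, the heuristic, and the sample URLs are all illustrative, and the real system obtains the owner and license data via the Flickr API rather than as parameters:

```python
# Hypothetical sketch of the attribution check: given the HTML of a page
# that re-uses a Flickr image, plus the photo's owner name and photo-page
# URL (which the real system would fetch via the Flickr API), guess
# whether a CC BY re-use provides attribution.

def appears_attributed(page_html, owner_name, photo_page_url):
    """Rough heuristic: attribution is assumed present if the page
    mentions the photographer's name or links back to the photo page."""
    html = page_html.lower()
    return owner_name.lower() in html or photo_page_url.lower() in html

# A page that embeds the image but gives no credit (illustrative URLs).
reuse_page = '<img src="http://farm1.static.flickr.com/123/456.jpg">'
print(appears_attributed(
    reuse_page, "Alice", "http://www.flickr.com/photos/alice/456"))  # → False
```

A heuristic this naive would produce false positives and negatives, which is exactly why distinguishing genuine attribution from incidental mentions is the hard part of the project.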
Future directions include extending this project to other licenses, determining the feasibility of looking for non-commercial use violations, and checking for the Share-Alike condition.
Mario Pena of Safe Creative, Joe Benso of Registered Commons, and Mike Linksvayer of CC gave a talk on “Copyright Registries 2.0” as a continuation of the registration conversation we had at our first tech summit in June.
Mario began with a summary of registries and how they should work: they must provide pointers to works, and they must facilitate the sharing of relevant information. He pointed to RDFa and ccREL as examples of technologies in this sphere promoting interoperability. He also mentioned the Open Standards for Copyright Registry Interop as an example of the work being done to help foster registry interoperability and standardization.
Next, Joe discussed what he sees as necessary for registries moving forward. The big point he made was that Registered Commons feels a registry authority is a necessary condition for registries to be successfully implemented. He started with a brief history of Registered Commons and named the features they provide, including use of the CC API, timestamping of works, and physical identity verification. He finished with the need for an authority: to allocate namespaces, appoint registries based on criteria, identify entities to be certified, etc.
Creative Commons CTO Nathan Yergler discussed the Creative Commoner network, which was developed beginning in October and is still under active development. The network allows creators to collect references to their work in one place — to act as a registry. It also serves to bring people into the CC community, and to aid interoperability and the connection of existing data and works.
The CC network sports personalized profile pages, OpenID, and a simple registry, which Nathan discussed in turn. Creative Commons can build layers of trust by validating a user’s “confirmed” name from PayPal transactions, lending license claims more legitimacy than they would otherwise have. But there are issues, such as name changes and incorrect or outdated information from PayPal.
OpenID is an open single sign-on standard which CC provides with a Commoner account. There are issues around this as well, such as a need to trust your provider. Nathan laid out the various ways CC is working to mitigate these issues.
But “the meat of the CC network” is in the work registry. As yet it is a simple implementation. Reciprocal claims and validation are key: the registry learns about the validity of a work registration claim based on the presence of matching license data on that page. This shows that the user making the claim does indeed have the ability to edit the work in question.
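That reciprocal check can be sketched in a few lines. This is a hypothetical simplification, not the CC network's code: the function name, the string-matching approach, and the sample URLs are illustrative, and a real registry would fetch and parse the page's RDFa rather than scan raw HTML:

```python
# Hypothetical sketch of reciprocal-claim validation: a registry treats a
# claim as plausible only if the claimed page itself carries the claimed
# license metadata and a link back to the claimant's profile -- evidence
# that the claimant can actually edit the page.

def claim_looks_valid(page_html, claimed_license_url, profile_url):
    """Return True if the work's page links to both the claimed license
    and the registrant's network profile."""
    return claimed_license_url in page_html and profile_url in page_html

# Sample work page carrying both links (illustrative markup and URLs).
work_page = (
    '<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">'
    'CC BY</a> '
    '<a href="https://creativecommons.net/alice/">my CC network profile</a>'
)
print(claim_looks_valid(
    work_page,
    "http://creativecommons.org/licenses/by/3.0/",
    "https://creativecommons.net/alice/"))  # → True
```

The key design point is that no central authority vouches for the claim; the evidence lives in the work's own page, where only someone with edit access could have put it.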
Future developments include better identification of works and metadata, registration of feeds, the ability to follow creators and their subsequent works, and further exploration of registry technology.