DEC
09
2011

DML Competition header

For the better part of the last year I’ve been working (with several others) to develop the Innovative Communication Design program at UMaine (yes, that’s a temporary site).  The classes will start in January, but when it’s fully approved it will be an online, asynchronous graduate certificate program that focuses on creative problem solving and applications of technology.  I’m the tech, more or less, but other faculty members have backgrounds in advertising, graphic arts, video production, and web development.

One of the base assumptions of the program is that ongoing changes in technology require constantly reevaluating how ideas are communicated, and we’re taking that to heart by finding new ways to communicate within the program.  That’s reflected in a few ways: we’re creating custom course delivery software that supports discussion, peer education, and collaboration; we’re recording many of our classes as group discussions rather than individual instructors; and we’re recognizing student accomplishments by giving them badges that can be tied to online resumes, social networks, or personal web pages.

The badge idea has been one we’ve discussed since the beginning, but our launch timing happens to coincide with the DML4 competition Badges for Lifelong Learning.  DML4 is an initiative of the Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC) and the Mozilla Foundation, with support from the MacArthur Foundation, that is looking at ways to represent both traditional and non-traditional education with digital badges.  I submitted a proposal to integrate badges into our courses’ software and recognition/grading system, and we’ve now been selected to move forward as a stage one winner!

The goal here is to extend ICD’s already unique structure of offering multiple short, hands-on classes to include a granular recognition of students’ accomplishments as well. In addition to the normal rewards of an ICD class, students also receive digital badges that can be displayed on an online resume, web site, or Facebook page. The badges are meant to convey more specific information about what students have done than would be shown by just granting them a certificate or degree at the end of their program, which is particularly useful for potential employers who might not know the range of skills embedded within a monolithic degree. Changing the recognition metrics in higher education also exposes aptitudes that are often hidden, including soft skills like teamwork or creative insight, and allows students to take control of how and where their accomplishments are shared with the digital world.

ICD’s badges will be built using the new Open Badges platform from Mozilla and integrated into a new online course delivery system that is currently under development.  The Open Badges platform allows students to display the badges they’ve been awarded on their own web sites, but not just as static graphics – the APIs used to share badges also embed metadata that adds the ability to securely validate the authenticity of the badge with the organization that granted it, creating a distributed but trusted infrastructure for recognition.
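
To give a flavor of that metadata piece: a hosted badge boils down to a JSON assertion served from a stable URL on the issuer’s site, which is what lets a displayer re-check it with the granting organization.  Here’s a rough sketch of what one of ours might contain – the field names are illustrative rather than a verbatim copy of Mozilla’s still-evolving spec, and all of the URLs are made up:

```javascript
// Hypothetical badge assertion: a JSON document hosted by the issuer so
// that any displayer can re-fetch it to validate authenticity.
var assertion = {
  recipient: "sha256$2ae8...",            // hashed student email, for privacy
  issued_on: "2012-01-15",
  badge: {
    name: "Creative Problem Solving",
    description: "Completed the ICD short course on creative problem solving",
    image: "https://icd.example.edu/badges/cps.png",
    criteria: "https://icd.example.edu/badges/cps.html",
    issuer: {
      name: "Innovative Communication Design, UMaine",
      origin: "https://icd.example.edu"
    }
  }
};
```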

I’m really looking forward to moving ahead with this project, and I think that – combined with the course delivery software we’re developing and the general methodology of the ICD program – there is a lot of potential in not just the material we’re teaching but also the processes we’re using to teach it.

SEP
11
2011

The Without Borders VIII: Breaking Ground exhibition, which includes my augmented reality piece The Variable Museum, has been showing in the University of Maine’s Lord Hall gallery since August.  We’ll be having a closing reception on Sep. 15th from 5:30-7:30 for anyone in the area who wants to stop by and take a look at it or any of the work by the other five artists represented in the show.  The Variable Museum is also scheduled for another iteration/showing at the Pixxelpoint festival in Slovenia this December.  From the press release:

The Variable Museum is built using augmented reality (AR) techniques that insert digital objects into a visitor’s view of the real world around them in real time. Viewers wear a pair of glasses with built-in cameras that record what their eyes would normally see and send that video to a computer where 3D objects are added before it is displayed inside the glasses. An optical tracking system allows visitors to walk around and see the objects from different perspectives, just as if they were physical artworks in the gallery.

Unlike the physical artworks in the gallery though, viewers of The Variable Museum do not all see the same objects. Each person sees something different, and only by talking to the other visitors can they discover what other people see and uncover the common ties that bind the artworks together. The Variable Museum uses AR technology not just to show objects that don’t exist, but also to show them in a way that would not be possible in the real world.

…and I’ll throw in a gratuitous shot of the gallery even though my piece is around the corner and not visible.

WoB VIII gallery shot

SEP
06
2011

The MFA is in

Last month was pretty much a full slate for me between the end of MoJo (seen below), the installation and defense of my MFA thesis (successful, more on that soon), and shipping off the review text of a book (thankfully covered mostly by my co-authors due to the other two).  Following those three events that all hit deadlines within a week of one another it was time for a break, so I took one.  Now, though, I’m back and things aren’t much slower than when I left.

The most immediate news is that Jon Ippolito and I are featured participants in “Mobility Shifts,” part of the iDC (Institute for Distributed Creativity) forum (New York, New York) in September 2011, where we’re leading a discussion on the community underpinnings necessary to promote DIY learning.  The big question on our minds is whether and how DIY education can be enhanced by crowdsourced feedback.  Sources of information like Khan Academy or Open CourseWare may provide data, but they suffer from a lack of feedback and deeper contextual understanding.  If there are ways to generate evaluative feedback without degrading into lectures or mobocracy, they may help DIY students learn faster or gain a more thorough understanding of the subject at hand.

Many thanks to Trebor Scholz and Caroline Buck for inviting us to contribute to this discussion.

The contributions will be archived at https://lists.thing.net/pipermail/idc/.

AUG
05
2011

Re:Poste is a third-party commenting system designed to allow rigorous discussion on the events reported in the media. Conceptually, it is based on the idea of peer review. Articles can be annotated and rated for quality and accuracy by a community of commenters. The best articles float to the top as part of an aggregation system that recontextualizes articles away from their source sites and into event-specific groups. Eventually, these curated news feeds will become fodder for new primary source material as discussions and interviews are introduced into the event feed in response to Re:Poste itself.

Re:Poste deployment comes in four phases: aggregation, interpretation, curation, and deliberation.

Though they all feed off each other by sharing data, each phase of Re:Poste is largely a stand-alone application and they can be deployed incrementally. The aggregation phase is similar to existing services like Reddit or Google News, except that articles are algorithmically (with some pruning from humans) grouped by specific events rather than by category. Given the success of existing aggregators and the twist Re:Poste introduces, phase 1 has value even before the heart of the system is added in phase 2. Pragmatically, the aggregation phase may be the most important piece for Re:Poste’s ongoing viability because it is where brand loyalty can be built and revenue produced.

Phase 2, interpretation, is where Re:Poste begins to really differentiate itself from other systems. A user on any media site only has to click a button–depending on the browser, either a bookmarklet or a plugin will work–and the article in front of them will be reformatted and dropped into Re:Poste’s commenting interface. The transformation is technologically not much different than existing techniques used by the Readability application, but Re:Poste adds functionality in addition to making the site easier to read. Like Readability, the source page doesn’t require any special markup or changes so compatibility is automatic–though it can be helpful to add some semantic markup if the media site decides to explicitly support Re:Poste.
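
For a sense of how lightweight that entry point can be, here’s roughly what the bookmarklet version might look like – the endpoint is a placeholder, and the real reformatting would happen once the Re:Poste page loads:

```javascript
// Bookmarklet sketch: hand the current article's URL (and any selected text)
// to a hypothetical Re:Poste endpoint, which would do the Readability-style
// reformatting and drop the text into the commenting interface.
javascript:(function () {
  var url = encodeURIComponent(location.href);
  var sel = encodeURIComponent(String(window.getSelection()));
  window.open('http://reposte.example.org/annotate?url=' + url + '&sel=' + sel);
})();
```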

Users have the choice of commenting on an entire article or just a part of it, as indicated by the hierarchical bar to the left of the text. When a user clicks on one of the indicated comment regions the column on the right displays annotations that other users have added to that section. In addition to the text comments, users will have given that comment region a rating for accuracy and completeness (not agreement, though certainly that will influence ratings to some extent). The amount of vertical space taken up by each rating class is a weighted total of the ratings given to that section of the text; in the example above, the weighted vote says that 72% of users believe that line to be completely inaccurate and 28% think it is more inaccurate than accurate. No true neutral option is given, forcing anybody who wants to have a say to choose a position to stand behind.

The commenting system is critical to Re:Poste’s success. A commenting community without regulation will quickly devolve into trolls and flamewars, and the goal of Re:Poste is to apply academic levels of peer review which–snide remarks aside–try to avoid that kind of discussion. To combat this, Re:Poste uses a game-based trust metric. Users adding an annotation are also forced to claim a level of expertise on the subject of the article, ranging from no expertise to special expertise on the specific event in the article. If other credible users find this claim to be bogus and the annotation is inaccurate, the author’s credibility will drop and their influence on overall ratings of other comments and articles will drop as well. Lesser claims of expertise have less severe penalties, but also count less toward the weighted article rating.
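
To make the trust metric concrete, here’s a rough sketch of how a region’s weighted rating distribution could be computed.  The weights and field names are placeholders, not settled design:

```javascript
// Sketch: compute the weighted share of each rating category for one
// comment region. A rating's influence scales with the rater's current
// credibility and the stake implied by their expertise claim.
var EXPERTISE_WEIGHT = { none: 0.5, general: 1.0, domain: 1.5, event: 2.0 };

function weightedDistribution(ratings) {
  // ratings: [{ category: 'inaccurate', credibility: 0.8, expertise: 'domain' }, ...]
  var totals = {}, sum = 0;
  ratings.forEach(function (r) {
    var w = r.credibility * EXPERTISE_WEIGHT[r.expertise];
    totals[r.category] = (totals[r.category] || 0) + w;
    sum += w;
  });
  Object.keys(totals).forEach(function (k) { totals[k] = totals[k] / sum; });
  return totals; // e.g. { 'inaccurate': 0.72, 'mostly-inaccurate': 0.28 }
}
```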

Phase 3, curation, brings the rating data generated by comments back into the initial aggregation interface. Now, though, Re:Poste is no longer just a simple aggregator; it reorders headlines and highlights the articles that best tell the story of the event at hand. No one news organization is privileged over another as individual articles are brought to the fore by the community. Ideally, balance can be found by surveying different media sources and users will find themselves outside of the echo chamber that tends to form around individual news sites.

Phase 4 takes the credibility fostered by the previous stages and leverages it to begin to produce new content focused on an event. With the curated news feed as a jumping off point, new online discussions with experts and principals of a story are held. The results of those discussions are archived and introduced into the event’s news feed, providing new primary source material to lend texture and context to the event.

Each of the four phases is its own semi-autonomous service, taking input data and generating new information about that data.  Since each of them could stand on its own, that is how they will be designed. The four phases are independent pieces of software linked through APIs.  Better yet, since the APIs are open the four primary phases of Re:Poste can be augmented by new interfaces and modules designed by others.  Want a Twitter stream of comments?  Go ahead and build one. Think a specific commenter is full of good information?  Make an interface that filters by author instead of event. Re:Poste’s internal flexibility is also an external asset, so the power of its commenting and trust systems can be applied wherever and however users think they make the most impact.
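
As a toy example of the kind of third-party module those open APIs would enable – nothing here is a real, settled interface – the filter-by-author idea could be as small as this:

```javascript
// Toy third-party module against a hypothetical open Re:Poste API: fetch a
// comment stream filtered by author instead of by event, sorted by the
// author's credibility on each comment.
function commentsByAuthor(authorId, callback) {
  var req = new XMLHttpRequest();
  req.open('GET', 'http://reposte.example.org/api/comments?author=' +
                  encodeURIComponent(authorId));
  req.onload = function () {
    var comments = JSON.parse(req.responseText);
    comments.sort(function (a, b) { return b.credibility - a.credibility; });
    callback(comments);
  };
  req.send();
}
```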

Re:Poste’s target is not limited to a specific newsroom or journalist; instead, it assumes that journalism is not definitively authoritative and directly addresses the needs of democratic society that journalism is supposed to support. With the modern multitude of news sources, it is possible to gain a larger view of an event than any one source can provide. Re:Poste, while privileging articles from established sources, also subjects them to critique and believes that end users have valuable contributions to make. When all informed voices are heard, everybody benefits.

JUL
29
2011
Space Shuttle Pawned?

An AP content insert from Shazna Nessa's lecture. I'm relatively certain that she didn't mean to suggest we pawn the space shuttle, but ads in news stories are funny like that.

 

Week 3 of the MozNewsLab had the tech evangelists step aside and the grizzled veteran news producers come to the fore. Shazna Nessa, Mohamed Nanabhay, and Oliver Reichenstein all gave us a look inside their respective histories in hauling journalism into the digital era, whether it liked it or not. While all of them had a lot of insight on working within the newsroom (I have to say, Mohamed in particular hit one out of the park) I’m not sure how much relevance these talks have to my project in the lab.

Re:Poste is intended to be a third-party, guerrilla, interventional system. If I have to work within a newsroom to get it running then I’ve clearly done something wrong. This is not to say that collaboration wouldn’t make it easier or more efficient to build the system or that I couldn’t learn a thing or two by hanging around the people writing the articles, as I clearly could.  However, since Re:Poste is intended to exist independent of news organizations building it within the walls of any of them would be somewhere between a conflict of interest and a fundamental conceptual flaw. My interest is in a community that discusses, moderates, and improves upon news, not a community that creates it.

I don’t appear to be alone in this interest, either.  The MozNewsLab participants have (almost) all put up double-tweet length descriptions of the projects they’re developing as part of the lab. I’ve put together a collection of ideas from that list that have similar goals to Re:Poste below, along with some commentary. (And also completely blown the max-500-words rule that MozNewsLab imposed on these posts in the process, though in my defense I put them in a neat little tab-thing and many of them are just quoted from other people.  Oops.  Hey, you wanted hackers, did you really expect conformity?) As my friend and frequent collaborator Jon Ippolito often says, “cheating is the pedagogy of the Internet”. Let’s get started.

http://www.ajennings.net/blog/

??? (Not yet unveiled)

Make commenting on and discussing news articles (actually all web pages) a feature of web browsers.

– Unify the commenting experience across the web
– Promote openness. Users control their own data
– Route around censorship
– Encourage diverse thinking

There’s not much public on this one yet so I can’t say a whole lot, but it sounds like plugin-mode Re:Poste. I’m interested in what is meant by “Users control their own data” and “Route around censorship” though, so I’m eager to hear more.

http://corrigo.org/

Corrigo

corrigo user interface

It all began with that sticky idea: Geek comedian Tom Scott couldn’t stand “dodgy” journalism anymore. So he created some “warning labels” to put on free papers he found on the London Tube.

It’s clear: If you can put stickers on newspapers, it should be possible online more than ever. Studying online journalism, we made this the subject of our diploma thesis. For three months, we analysed quality (control) in German and American journalism. We learned a lot about fact checking, accuracy and (failed) attempts to involve the public in media accountability.

Now we put all these learnings together and conceived a proper service: corrigo

It helps you flag and correct factual errors in online media – directly at the article.

In motivation and conception, Corrigo seems to be a perfect mirror of Re:Poste. Corrigo was created as a thesis project by Tobias Reitz and Kersten Alexander Riechers so there is a lot of information on the idea, but unfortunately I don’t know German so I can’t read the paper and am stuck with interpreting the pretty pictures.  There are a few things in this interface sketch that are worth thinking about (or stealing outright):

- Switch timeline: I’m kind of intrigued by this as I’ve never thought of applying Re:Poste to an entire site at once. I’m not quite sure how it would work, at least if I actually included a granular timeline of comments across all articles. Maybe it becomes an entry point with article-level links and an overview of each article’s ratings, kind of like the Re:Poste home page was in my original version but embedded as part of the app chrome to allow article-to-article navigation and browsability within the Re:Poste interface.

- Categorizing errors: I think this could be important.  I might not go as far as allowing end-users to correct typos (what would be their interest in doing so? how much value does it add to the comment thread?) but classifying an objection beyond just “this part is bad” would be useful.  The problem is in finding a balance between an informative detail and a list of categories that takes more time and energy to manage than it provides benefit.  “Factual error” and “missing source” seem like good options, but what else would there be?  If those are the only two options, it seems like it would just be one more button to push in the interface.

- Verified account: My initial version of Re:Poste buried personally identifiable information about a poster on a second page that users had to click through to get to. This was intentional–Re:Poste is not intended to be about the people, just the information, and making it easy to identify a commenter just makes it more likely that the people will be discussed instead of the article. But a verified account would have value in the system because it adds credibility to have a real name behind the post. This seems like an excellent idea, but one that I would not build into Re:Poste’s primary interface. I’ll stick with exposing only the poster’s calculated credibility, not their ID.

- Badges: Badges are hot right now, with good reason. I think this falls into the same category as verified accounts in that the data coming from a badge (“best commenter award, July 2011!”) might be worked into credibility calculations but it wouldn’t go on Re:Poste’s main interface.

- Sharing: Re:Poste’s original genesis was before the age when every page had a share button, so it was never put into the interface.  It should be there, though it should be on an article-level basis to link to the article+comments, not at the comment level.  Another button for the chrome, I guess.

http://howzitgaun.com/

Discursv

The problem – Online comment threads can be a haven for discussion, providing the journalist a link with their audience, or they can be troll-filled arenas of abuse, offering little to the user or author. What’s more, on large sites, it can be difficult to follow the many threaded discussions, and filter the signal from the noise.

The zeitgeist – There are many sites just now which allow comments which are perceived to be trolling to be downvoted, or moderated by staff. This is useful to an extent, but on sites with a political lean, for instance, voices from the opposition point of view can be drowned out.

The discussion – We began by thinking about the role that trolls play on discussion threads. In one sense they exist simply to annoy the other posters on the thread. But we began considering how this division could be leveraged to encourage discussion, similar to the ancient Greek philosophical concept of sophism, the teaching of rhetoric. If the contrary nature of online commenting could be harnessed, the ensuing discursive nature of the comments moves the article to an encompassing view of the news being discussed.

The idea – Comments could be rated not simply as an ‘upvote’ or ‘downvote’ but on a multivariate basis:  support/retort, useful/useless etc. Users would be asked to consider not only how good the post is, but how it directs the discussion.

The opportunities – Multivariate rating allows new forms of data visualization, giving an idea of the direction and flow of the discussion. It also allows the author, or a third party, to curate the discussion.

Trolls. Personally, I’d be happy to just stamp them out completely by hiding posts below a certain credibility level. But it’s interesting to think of how they can be used to improve the overall quality of comments. They do occupy a definite place in the online ecosystem, though not a great one–the final stage of life for most online forums is to have nothing but a core of old users trolling and counter-trolling each other based on years of shared experience.

Thinking about how trolls generally work may be useful, though.  There is not a lot of originality in their posts; they generally get their talking points from other sources and just throw them up against each other, like gamblers at a cockfight. What if, instead of tagging by word, comments were tagged by argument? Those arguments could then be filtered, hiding anybody who writes “Obama is a sekret Muslim.” Trolls only have so many arguments and would soon run out.

But this introduces a point of vulnerability as well. If anybody can come along and tag any post “Obama is a sekret Muslim” then it would be very easy to get legitimate arguments filtered. That means there would have to be a metamod system on the argument tags, which would feed back into credibility calculations. I’ll have to think about it, but it seems like this might be a case of too much overhead for users to think about.
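
For the record, the metamod guard I have in mind would look something like this sketch – all the names are hypothetical.  A tag only counts against a post once credible users have confirmed it, so drive-by tagging of legitimate arguments can’t silence them:

```javascript
// Sketch: hide comments whose community-confirmed argument tags match a
// user's filter list. A tag only takes effect once its metamoderation
// score clears a threshold.
function visibleComments(comments, filteredArguments, threshold) {
  return comments.filter(function (comment) {
    var hidden = comment.argumentTags.some(function (tag) {
      return tag.metamodScore >= threshold &&
             filteredArguments.indexOf(tag.label) !== -1;
    });
    return !hidden;
  });
}
```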

The actual idea of having multiple categories of rating is useful though, and ties into the idea of categorizing errors from Corrigo. If used responsibly it can add a lot to the system. I still wonder how many users are going to use it, responsibly or not, since it requires additional analysis. On the other hand, deeper thinking is what Re:Poste is after.

http://neildawson.org/blog/

The News Tree

News Tree

Powered by user-submitted URLs and a data scraper which follows blog trackbacks, hashtags and other digital trails, this visualisation tool uses a tree metaphor to map not just the coverage of a news item but the development of user discussion around it. The scraper also gathers data from up and down-voting systems on various websites to rate and display the perceived value of each comment.

For example: the earliest articles and mentions build up the ‘roots’ of the tree; the most active and saturated conversation hubs flourish into ‘thickets’; the thickest branches are the established news outlets and the thinnest are freshly started blogs; each leaf represents a single comment which will fade over time unless referenced or replied to; and leaves representing comments which are downrated by many other readers will appear diseased.

However, more than simply a pretty visualisation, the user can take advantage of these cues to quickly acquaint themselves with the fundamentals of the story by locating and reading the original reportage, the most helpful or controversial comments and the most active conversations, which they are then informed enough to take part in. They can also curate their own experience, filtering out (or indeed seeking out) ‘trolling’ or other undesirable conversation branches.

I had a version of this built into the previous iteration of Re:Poste where it merged articles with identical content to create true cross-site comment threads for syndicated stories. That was a much simpler system than The News Tree proposes, only aggregating content on target sites instead of reactions across different platforms. I very much like this idea as a visualization on top of Re:Poste’s comments and metrics though. It adds something I’ve regretted not putting in for quite a while, the ability to track an event independent of individual stories. If it were applied to track multiple articles instead of just reactions then it could provide an ongoing timeline of the development of the story. That’s something that would be tremendously useful for getting a high-level view of events as they occur. If it were applied to just a single article with Re:Poste comments as the leaves it would also be a cool viz tool or alternate interface for the data.

http://blog.k-zhu.com/

Roundtable

Roundtable is a platform for engaging readers, journalists and experts around salient news topics. Inspired by Oxford-style debate, the app allows newsrooms to crowd-source news analysis and engage readers by inviting them to the table to participate in the debate.

I have to say, I absolutely love the idea of rigorous debate on news articles. I also have to say that I have absolutely no faith in a general user community to actually engage in a rigorous debate on news articles. If this were applied to a gated, verified, externally credible community and then overlaid atop an article as an education tool, it would be great, provided one could come up with such a community. I don’t think that’s Katie’s goal with this project, but it would be mine.

Re:Poste has always been designed as an asynchronous application. But what if there was also a synchronous component? Schedule online discussions between high-level participants that could be run as a traditional debate, but with input from the larger community. Then, after the debate, it could be attached to Re:Poste comment streams on articles about the same event or subject. This would be an orthogonal discussion to the comment stream but would inform it and, if attached to multiple stories, could really add value to the comments as a more involved debate about the issues at hand. Tie it into the News Tree variation and it becomes a persistent means of informing an ongoing discussion at a very high level.

http://hypothesiz.tumblr.com/

Hypothes.is

An open-source, distributed platform to enable sentence-level, community moderated annotation of news, blogs, legislation, scientific articles, PDFs, books, video, etc. without the consent of the target. Crowdsourced peer-review for the Web.

The simple expression of MVP I’m working on is thus:

  • Pointing into text and addressing it (ala Awesome Highlighter)
  • Attaching an annotation to that location
  • Associating a formalized stance or sentiment to that annotation (to allow aggregated measurements of those stances)
  • Moderating the annotations via a good meta-moderation model (think slashdot, but with a skew towards domains moderators are known to be knowledgeable about)

Also, we’ll need a good leaderboard / zeitgeist page to show what’s happening, and possibly also snuck in the MVP is a javascript widget that would allow rollover sneak previews or “peeks” of annotations on pages that point to them—kind of like active pop-up footnotes.

We will employ several features we believe are critical to a successful outcome:

  • Inline annotation – Specificity is key. Especially in a lengthy article, locating critique at a sentence or paragraph level is important to centering the dialog around that passage.
  • Indication of stance – Does the critique support or challenge the associated text, and what is the specific relationship of the annotation to its target?
  • Powerful references – Properly enabling the use of references and citations is key; allowing their reputation to be inherited is an important aspect.
  • Collaboration – Users will be able to suggest improvements and modifications to others’ contributions.

Most important perhaps is the way reputation is handled. Providing the proper checks and incentives which will encourage quality contribution and discourage trolls and the uninformed is all important. Hypothes.is will employ randomized moderation and meta-moderation that favors moderators with similar or adjacent domain expertise, as inferred from their previous contributions.

We imagine an objective metric for articles that is a composite of the accumulated critique they receive (itself dependent on the reputation and domain expertise of those providing the critique) together with several automated measurements, such as the number of other credible articles that cite them (citation rank), their social rank, the degree to which they motivate the following of the author and the number of facts and details that they include.

Again, this sounds very similar to Re:Poste, and indeed some of the predecessors they cite are the same ones I looked at in developing Re:Poste years ago. Though there are a lot of goals floating around for Hypothes.is and claims that it will be different than all the precursors that failed, I haven’t seen much about how those goals will be accomplished yet so there’s not much I can say about implementation specifics at this point. A couple of interesting features:

- Powerful references: I have no referencing system in Re:Poste. This was an intentional decision because my goal was to make commenting as quick and painless as possible, and adding references tends to be neither. My thinking has changed somewhat though, based largely on Wikipedia. That is a model that absolutely requires sources…but only after the fact. I think Re:Poste could work the same way by offering the chance to put in a reference but not requiring it. Of course, that also means there has to be a [citation needed] button, but I’m thinking that is relatively simple. Maybe if there was a reference organization system that was global to the thread, much like MS Word’s reference panel, and contributors could just pull out the applicable links…hmm. I’ve sketched that below, after the last note.

- Collaboration: Another interesting idea that I’m unsure about from a UX perspective. It sounds like it would turn into wikiComment, which is nice in theory but I would be very hesitant to touch. If Hypothes.is has the right interface it might work, but I haven’t seen an interface put forward for it yet.
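
Here’s the data side of that thread-global reference pool idea from the references note above – the structure and names are hypothetical:

```javascript
// Sketch: references live on the thread, comments just point into the
// pool, and a [citation needed] flag sticks to a comment until a source
// is attached.
var thread = {
  references: [
    { id: 1, url: 'http://example.com/primary-source', addedBy: 'user42' }
  ],
  comments: [
    { id: 7, text: 'This figure is wrong.', refIds: [], citationNeeded: true }
  ]
};

function attachReference(comment, refId) {
  comment.refIds.push(refId);
  comment.citationNeeded = false; // flag clears once a source is supplied
}
```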

 

If you read through all of these ideas there is the potential for a very compelling synthesis application (or convergence of apps via APIs).  I’ll try to put the pieces together for my final proposal next week. This is why open source development works.

JUL
26
2011
Irish Stew in a bread bowl

Flickr image by Cloned Milkmen

GhostCoder is an audio steganography tool intent on using file sharing as a massive backup system, squirreling away important public domain data in music files.  I eagerly await somebody stuffing DeCSS into Purple Rain.

Mozilla is working on an Open Badge system to encourage self-directed online education.  I’ll have much more on this in a later post since it intersects with some ideas we’re playing with for the new ICD program at UMaine.

Google+ doesn’t allow pseudonyms.  Of course, neither does Facebook, but enforcement is, uh, lax.  Come on, even the federal government realized that this is a bad idea.

Edd Dumbill thinks Google+ is going to be a social backbone.  What could possibly go wrong?

Chosen is a javascript plug-in that makes long, unwieldy select boxes much more user-friendly. It is currently available in both jQuery and Prototype flavors.  Says so right on the label.

It appears that we don’t remember things that we know are written down.  And on the Internet, everything is written down.

3D glasses are painful and damage your eyes.  I’ve already given up going to any movie that’s released only in 3D, now I have a medical justification to go with my aesthetic one.

Teleworkers are happier than people chained to a desk.  For some reason, this study appeared in the Journal of Applied Communication Research instead of the Journal of the Blindingly Obvious.

News flash:  pirates buy a lot of media.  The (roughly) 832nd report confirming this came out recently.

MTV thinks it knows how to market to the Millennial generation online.  Presumably, no actual music is involved.

Disk+Mouse

Just in time to catch the disappearance of optical drives from laptops, Yanko Design has a concept for a mouse that turns into a CD.

Lion trick-of-the-moment: Use a web app without a browser.  There have been many, many other attempts at this…we’ll see.

Ars has a truly massive review of Lion up.  And since there is now a way to install it without an App Store download, I might actually consider upgrading.

Yes, you can do games in HTML5 and Canvas.  Rob Hawkes will get you going.

Closed caption television -> Arduino -> Processing -> Word cloud.  This was inevitable, really.

Is Ashton Kutcher the Neo of the Twitter Matrix?

You can now hook your iDevice to your Arduino.  Or at least you could have, if you ordered fast enough.

JUL
25
2011
HTML5 & Friends

HTML5 slide from Chris Heilmann's lecture

 

This week’s MoJo featured more great material from Chris Heilmann, John Resig, and Jesse James Garrett that mostly focused on building a user experience.  Of course, when thinking about open projects “user” takes on an expanded meaning and, as Resig stressed, every user is a potential future collaborator.  In a previous post I talked about the old design of Re:Poste, which featured some of the ideas discussed last week but, compared to the sites we saw presented this week, really shows its age both conceptually and technologically.  This week I’m doing some v2.0 design prototypes that take advantage of some of the newer possibilities.

To start, I’m throwing out the pseudo-window that contained the old design.  It’s no longer necessary: it was intended to create context, but with the development of microformats and some new CSS3 tricks we can do much better.  The application Readability demonstrates this to great effect by transforming content from its original look and feel to a beautiful, text-centric look:

Readability, applied to a Fox News article

 

This is a perfect technique for an application like Re:Poste.  Instead of trying to fit everything into a little window that loads on top of the article and all its clutter, I can free the text and overlay my own interface that emphasizes just the content and the commentary.  Even better, I already know it works despite the fact that Fox News doesn’t actually use the microformats Readability asks for.  They appear to just be parsing the site’s div ids and classes, which means that I can do the same thing on a variety of sites (though it’s more labor intensive than microformats).
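
A rough sketch of that per-site approach – both selectors below are invented for illustration, not anybody’s actual markup:

```javascript
// Sketch of Readability-style extraction without microformats: a small
// per-site map of the selectors that hold the article body, plus a crude
// heuristic fallback for sites that aren't in the map.
var SITE_SELECTORS = {
  'www.foxnews.com': 'div.article-text',
  'news.example.com': 'div#story-body'
};

function extractArticle(doc) {
  var sel = SITE_SELECTORS[doc.location.hostname];
  var node = sel && doc.querySelector(sel);
  return node || guessMainContent(doc);
}

function guessMainContent(doc) {
  // Fallback: pick the container with the most paragraphs in it.
  var best = doc.body, max = 0;
  var candidates = doc.querySelectorAll('article, div');
  for (var i = 0; i < candidates.length; i++) {
    var count = candidates[i].getElementsByTagName('p').length;
    if (count > max) { max = count; best = candidates[i]; }
  }
  return best;
}
```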

Now that the text is isolated I can start adding my data and metadata.  My first attempt looked like this:

Re:Poste 2, sketch 1

First sketch of a modern Re:Poste interface (click to enlarge)

 

This interface is starting to integrate the article with the commentary and recognize specific segments of the source content.  The strips along the left are color-coded data about the text to the right of them: the far-left column characterizes the entire article, the second column works at the paragraph level, and the third is line by line.  Click on any signifier and the pertinent section is highlighted, carrying over to the right where comments about that section are displayed.  In addition to the text, comments show the author’s rating of the highlighted section and their claim of expertise (the CFL icon, which is pretty horrendous).  While this version adds granularity and integrates with the article better than the original, there are obviously still some issues.

Re:Poste 2, sketch 2

Second Re:Poste 2 interface sketch (click to enlarge)

 

Sketch 2 changes the comment column to show all four rating options, with the vertical size proportional to the weight of comments in that category (this is a weighted trust metric that includes reputation and metaratings).  There are fewer icons to display data, but the data is used to adjust the order of comments.  I would still like to expose the raw data, but as long as it is represented I think the actual numbers can be hidden behind a button on the chrome.
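
Rendering those proportions is then just arithmetic on the weighted shares – a sketch, assuming shares like the ones a weighted-distribution calculation would produce:

```javascript
// Sketch: convert weighted category shares into the pixel height of each
// rating band in the comment column.
function bandHeights(shares, columnHeight) {
  var heights = {};
  Object.keys(shares).forEach(function (category) {
    heights[category] = Math.round(shares[category] * columnHeight);
  });
  return heights;
}

// bandHeights({ accurate: 0.10, 'mostly-accurate': 0.18,
//               'mostly-inaccurate': 0.28, inaccurate: 0.44 }, 600)
// -> { accurate: 60, 'mostly-accurate': 108,
//      'mostly-inaccurate': 168, inaccurate: 264 }
```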

JUL
18
2011
Irish Stew in a bread bowl

Flickr image by Cloned Milkmen

Some recent links of interest:

NewScientist reports on a study about what messages get retweeted.  Apparently it’s bad to use negative words, but good to use negative emoticons.  I’m sure there’s a comment on empathy in there somewhere.

Poynter has some suggestions on increasing the stickiness of transient news hits.

The .xxx domain rollout is hitting some rough spots?  Shocking.  I eagerly await the flood of $185k TLD spam.

Mozilla is working on a BrowserID federated login system (though it has some issues).  Curious to see how this fits with the NSTIC plan.

Infographics are nice, but field notes have been beautiful for a very long time now.

Astroturfing is bad, m’kay?  Though, does it count as astroturfing if you own a newspaper?

Clay Shirky wants to avoid monoculture in the next phase of journalism, whatever that may be.

What if everybody who tweeted something that’s patently false got an instant reply telling them as much?

Google News badges:  Like FourSquare, but for the web.  For people who were afraid that Google didn’t have enough ways to track their clickstream, I guess.

CCP’s attempt to make more real money might devalue their fake money, causing the EVE community to revolt over $60 monocles.

Some tech changes cause pushback, while others are quickly accepted.  What’s the difference between them?

The Kinect is getting hacked for serious work.  I can attest to this, as I’ve seen it in person.

A computer finally learns to play CivII effectively, unlike the original AI.  It just had to RTFM.

Yes, you can do Isometric Text in CSS3.  You can even edit it live.  Javascript PaperBoy in 3…2…

JUL
17
2011
Black Belt Map

Other than appearing in one of the lectures, this image has nothing to do with the post below. It is included because it is completely awesome. Read why in this article.

 

This week saw the launch of #MozNewsLab, the next stage of this year’s MoJo innovation challenges.  I’m taking part in this because I submitted an older project that I thought had potential to improve intellectual integrity in the media: Re:Poste, which I originally started work on in 2005 but dropped before it could go anywhere due to changes in the web platform that made it impractical.  Basically I was using techniques that were very similar to XSS, and there has been a big push to close those security holes over the last few years.

The problem that Re:Poste was trying to solve has only gotten worse in the intervening years though, and–technological limitations aside–I still believe that the core ideas are useful.  This week Aza Raskin, Burt Herman, and Amanda Cox gave us some great thoughts that spoke to the early stages of launching a project.  What I found interesting is that the one person who didn’t expressly talk about launching a project, Amanda Cox, is the one that I think gave me the most insight into how Re:Poste could be more successful.

As documented here, I went through many of the rapid prototyping and development stages that Aza and Burt discussed in their talks during Re:Poste’s original design phase.  This is the basic interface that I prototyped:

You could get more information by rolling over the icons in the left-hand column, but this was pretty much it for the end-user interface (contributors had a few extra buttons).

There are a few basic data visualization aspects here.  The background color of the post shows community approval of that post, while the color of the weeble on the left shows what level of expertise the poster has claimed on the article.  But the interface is intentionally limited: there are no threads (in fact, discussion is not possible at all on Re:Poste because it is intended to focus on responses to the article, not other commenters, so the system only allows one post per person on an article), no options to reorder posts, no names or even account names shown.

There also isn’t any hint of why the data that is shown is meaningful.  In trying to strip down the interface and make the system as clean and neutral as possible, I removed the ability for commenters to make any kind of compelling case for their comment.  I went to a lot of trouble to make a Javascript interface that could load on top of any page on the web in an effort to maintain context, but just putting the window on the page isn’t sufficient.  It needs to allow interaction with the underlying page, at least give the option to use granular tools to mark up a story, and show what pieces are good or bad.  This interface only tells users a story if they read the red and green comments; instead, it should show the community’s opinion of the story at a glance and include text as more detail.

JUL
07
2011

I normally don’t care that iPads don’t vibrate.  However, for a study at the VEMI Lab we’ve been looking at testing different types of vibration as a haptic interface for low-vision users of mobile devices.  Unfortunately, most mobile devices that have vibration built in are phones, and phones have small screens–not ideal for some of the experiments we want to run.  The only larger mobile touchpad we have on hand is an iPad, but as I said…they can’t vibrate, at least not without some help.

Putting a motor on the outside of the case wasn’t very productive, either on the aluminum back or the glass: the little motor from Radio Shack just isn’t strong enough to provide a useful sensation on the outside.  So, the help has to be internal, which means breaking into the iPad.  As documented here, this isn’t for the faint of heart (though it is easier than it looks).

While putting a motor in is nice, we also need it to be driven by activity on the iPad.  There are a few ways to do this, but I took a bit of a Gordian Knot path and decided to just run the motor directly off an audio signal.  The iPad obviously has a headphone jack on it, but since we’re in a lab environment I also wanted to be able to power the motor off external audio sources for a bit more flexibility.  That led to this:

iPad with audio jack

 

Yes, we drilled a hole in the back of an iPad and stuck an ugly wire through it.  Yes, we’re going to Apple Hell.  But it works: the iPad now vibrates more strongly than most phones, over its entire screen.

A few build notes:

Attaching the motor to the iPad’s case was a bit of a pain.  In the end it took some liquid weld epoxy to hold it in place.  In addition, it’s pretty cramped in there, so there wasn’t room to add a second motor as I had planned.  (Hey, audio is stereo, might as well run two motors, right?  Ah well.)

Inside mounting of the motor.

 

Breaking into the iPad is as annoying as advertised and I broke off most of the pins that hold the glass in place on one side.  Luckily, once it’s back in its case you’d never know I opened it.  A quick cut in the case protects and hides the wire coming out of the iPad.

Back of the iPad in its case, with just the jack extension showing

 

The overall system is a bit more complicated than just running from the headphone jack to the new input.  Not only is there not enough power to drive the motor directly, but the signal also needs to be rectified.  Again, since this is a lab situation, I can afford to be a bit bulky with my solutions:

The complete setup, with amp and rectifier

 

The silver and black box is a headphone amp, which boosts the signal enough to turn the motor (and gives me the ability to adjust the vibration strength just by changing the volume).  Between the amp output and the iPad is a project box that takes one audio jack, runs both channels through a pair of rectifier diodes, and sends them back to an output audio jack.  I probably could/should have thrown in a smoothing capacitor, but it wasn’t necessary.
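
For anyone wiring up something similar: the drive signal is just a tone, and anything that can play audio works as the source.  As one sketch, a browser with the Web Audio API can generate it – this is an illustration of the lab-side source, not part of the iPad build itself:

```javascript
// Generate a steady drive tone for the motor; the amp between the jack
// and the motor sets the vibration strength via its volume knob.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
osc.frequency.value = 150;     // low frequencies buzz rather than whine
osc.connect(ctx.destination);  // out the headphone jack, into the amp
osc.start();
```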