Observing how we use multiple devices and interfaces

I’m enjoying my new Windows desktop replacement computer, which happens to be a Lenovo Yoga Pro 2. Yes, that’s right, an ultrabook. I have added some enhancements to it, such as an external optical drive, a second monitor, and a USB Ethernet adapter, in addition to a wireless keyboard and mouse, but aside from these things, it is plenty powerful. I need something fast, roomy, and robust enough to let me develop multimedia content, yet light and mobile enough to carry around to classrooms and workshop spaces. It’s my first new desktop replacement in 4 1/2 years. The noticeable differences include the lack of any need to install much new software, plus new capabilities such as gesture-based computing (I’m not really using that yet, but it’s included), face-recognition security, and the touch screen.

What this means is that I am interacting physically with a set of smaller computers at a standing desk, not only with my mouse and keyboard but with gestures and touch. I find I naturally migrate what I do from my Samsung Galaxy S4 to the computer screen, to my various tablets, and to television monitors. Now we have a new user dimension: a more complete, physical, immersive experience that depends less on my hardware and more on my bandwidth. It’s far more shared and social, and ever less reliant on place or OS. I see a natural shift to wearable technology next, where subtle, natural cues such as temperature, color and lighting will enhance how I interact with the computing world.

The next challenge for me, personally, is to clearly differentiate and organize all my content across all these access modes. Will we develop a better shared understanding for organizing all this shape-shifting content? The traditional file directory structure seems less relevant now that we can rely more on search and pattern recognition – but that’s on-demand and constantly shifting, so collective understanding seems both more attainable and less concrete at the same time.

NWACC 2013

NWACC was scaled back again this year, as it was last year. We didn’t even have a keynote speaker this time around, but instead tried the unconference approach (which I find sometimes works, but often winds up chaotic and messy). A brewpub is not my idea of a good dinner spot. And the Portland Place Hotel is downright dreary. Take us back to the White Stag Building, please!

Criticisms aside, we had a great group at this meeting. As we do every year, we tried to identify common problems and then come together to seek resolutions for them. We also reaffirmed that we are all experiencing the same problems, which is a bit amazing in itself.

We identified what I would call our top three problems:

1. Higher-level administrators don’t understand the field of ed tech, so they can’t evaluate faculty needs or performance, and they can’t plan effectively for the campus. I bet every campus ed tech person can name instances where the blind have tried to lead the blind, sometimes with disastrous results.

2. Faculty need incentives and motivation outside of the P&T structure (which we would also like; see #1). What motivation could we provide, and how?

3. Faculty enjoy learning from their peers. How can we facilitate a peer learning network for faculty?

None of these are particularly new, but each year our answers become more nuanced and sophisticated. We have formed a Google community, and I hope we can continue to grow and share what we’ve learned there. As a rather senior person in this field, I didn’t learn much, but I did take away one good tip. Many times I offer a workshop for which I know there is interest, just not much at the moment I happen to offer it. Instead, we can offer workshops once a critical mass has been reached. Say we provide a sign-up sheet for those who want Workshop X; only once that sheet has ten signatures do we offer the workshop, at the time the ten signers most prefer.

Also, we think social curation tools like Learnist or Storify might be a more effective way to get the message out – I’ll explore these tools more deeply.

Zombification of education?

Serena Golden interviews the authors of “Zombies in the Academy”, found at http://press.uchicago.edu/ucp/books/book/distributed/Z/bo15566853.html.

Some of the things the authors report have been true for years:

1. Student evaluations are not a useful means of evaluating teaching or course content, yet we cling to them.

2. Students are expected to go to college to learn, but they quickly learn to seek good grades instead.

3. Faculty are expected to teach well, but they quickly learn to seek P&T rewards instead (usually in the form of grants and publications).

4. Post-secondary education lures students with promises that a degree increases employability, but in reality it offers high student debt paired with learning outcomes that are too often unrelated to the skills needed in the workforce, or tied to jobs that the labor market doesn’t really have. Law degrees, for example: there are many more newly minted lawyers out there than there are law jobs.

See an excerpt below:

Q: You write, “I take zombification here to refer to those processes within the university … which, in instrumentalizing action (teaching, research) in the service of pseudo-market principles, decapitate the real ends of that action, while reconstituting the means as a kind of spectral presence of themselves.” Can you give some examples of what you mean by this?

Whelan: Where to start?

Success in teaching is usually indicated by forms filled out by students. It is a kind of popularity contest. It collapses the future in that students are asked about the course they are taking right now, how the teaching is in that. We never ask them the following year or three years later or at any other time what they remember of that course or of the teaching in it, because, evidently, we don’t care. It is hard to understand how bribing this semester’s cohort into up-voting us on a form in a few weeks’ time safeguards the quality of their education. Perhaps this is supposed to make sense because they are being treated like customers, buying something a bit like a burger. How’s that burger you are eating right now? Quality control and customer satisfaction are synonymous. But of course, educations are not like burgers and more or less by definition it is not easy for the “customer” to judge or understand what a good education is.

Research is much the same: success is judged by your capacity to fill out forms asking for money to do research (in Australia, the only money that matters is that from the Australian Research Council). It helps if you have “outputs” in the form of previous publications around which metrics like the H-index can be generated to quantify how useful you are (in a disconcertingly short run). You should therefore publish whether you have anything to say or not. Economists have been able to demonstrate that these funding processes are ridiculously wasteful in terms of both labor hours and the allocation and dispensing of the funds: it’s basically a lottery, where the only guarantee is that most players will be losers and will be treated accordingly.

An interesting consequence of this system is that effectively the capacity to spend money is rewarded with more money. There is no discourse of frugality or sustainability. Far better to maximally inflate the cost of whatever it is you do so as to ask for the most money possible: profligacy is here the sign of quality.

Insofar as there is any logic to this at all, it appears to reside in a cultish faith in bureaucratic “transparency,” the idea that whatever supernatural weirdness research and teaching involve, they can be made visible, explicable, and rankable through forms.

Alongside this bureaucratic absurdity – which has fantastic structural implications in terms of its costs and how it has reorganized the social field and the day-to-day practice of everyday life inside the institution – there is the inane idea that having everyone compete with everyone else at every level, from nation down to individual, somehow guarantees efficiency. If we look at the other social problems we face, a case can be made that it is time to consider moving beyond this dogma and using our imaginations. We could start by imagining that, by and large, people can be trusted to do the job they are paid to do. Then we could start thinking about the work that gets done that nobody gets paid to do, and whether we value this work or not.

Read more: http://www.insidehighered.com/news/2013/08/12/new-book-examines-higher-education-through-lens-zombie-apocalypse (Inside Higher Ed)

Chromecast initial report

I got my Chromecast ($35) last week: super cheap for a device like this. It comes with 3 months of free Netflix access, and since I don’t already have Netflix, it almost pays for itself. In terms of functionality, it’s easy to set up, but you do have to have a Google account and use Chrome with the Chromecast app installed. You also need either a free USB port or a power outlet for power, unlike Apple TV. Luckily, my smart TV has both a spare HDMI port and a USB port available. I found it streams content from my laptop reasonably well, with a slight lag; Apple TV is amazingly clear and lag-free over AirPlay. For Windows users who love the Roku 3 for streaming, Chromecast is probably of little use right now unless you really value being able to stream whatever Chrome can play on your device. Apple TV is limited compared to the Roku 3, doesn’t even stream from iPads consistently via AirPlay, and only works with iOS devices. I expect Chromecast to improve, however. It will need more channels and perhaps some performance improvements via firmware updates before I’d call it a killer device. I have more testing to do, but right now, for home users, I think the Roku 3 is the device to own, and for us education users on campus, Apple TV for its AirPlay feature.

The importance of voice in narrative

Like many multimedia developers, I believe that the most important part of a video is the audio. For many novice video makers, audio is more challenging than the video itself, because it’s not as simple to trim, position, and normalize audio clips, and because it’s not always immediately evident how jarring mismatched or indecipherable sound can feel to a listener. Here’s a great little site that offers some tips for developing helpful audio for learning: http://www.sweetrush.com/press-play-5-tips-for-writing-audio-scripts/

I’m also a long-time member of Audible.com, and I’ve had to learn to check the quality of the narration before I buy any audiobook. I love “Neuromancer” and other books by William Gibson, but when he narrates his own work he sounds oddly robotic and disconnected from the meaning of the words. No word or phrase is differentiated from any other; an exclamation of fright is delivered in the same rhythmic pace and range as a description of a keyboard. In this respect, actors really prove their value – Meryl Streep could narrate a phone book and make it sound wonderfully interesting by comparison.

Now we have Amazon ACX self-publishing tools for independent authors who want to create their own audiobooks. ACX helps connect writers to voice actors, which is cool, but it also allows writers to narrate their own works. Will scientists, professors and other authors choose to do so? If so, what impact might this have on learning?

Proving value in higher education

This morning, Oregon State tweeted that it made #49 on a list of the best 30-year ROI relative to online tuition costs. That higher education can improve one’s marketability is no secret, but proof that online education can do the same is still news.

In D.C., Elizabeth Warren decries the state of student loan interest rates: http://www.youtube.com/watch?v=zTisqNKEHrU

Meanwhile, new models for affordable higher education are emerging. Poor students in Louisiana are receiving coaching via mobile phones that helps them succeed in community college nursing classes, even if they come from families too poor to afford personal computers. Udacity can provide a Master’s in computer science for $7,000 via Georgia Tech. University Now, whose founder Gene Wade was the keynote at UBTech this year, offers a debt-free, pay-as-you-go model for a program that is focused on outcomes, is self-paced, and offers personalized coaching, tightly aligned competencies, and disaggregated grading (all very appealing to students and employers alike). Earlier this week, the State of Oregon announced several big new ideas for post-secondary education, including an innovative “pay it forward” model for higher education that was born of a PSU class. This idea would waive tuition for community colleges and universities and ask students to pay back this tuition over a period of 20 years at a rate that varies depending on their earnings. The idea is moving forward via the Higher Ed Coordinating Commission for consideration by the Legislature by 2015 (according to Oregon Live).
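
To make the arithmetic of that “pay it forward” idea a bit more concrete, here is a minimal sketch in Python. The 3 percent income share and the salary figures are my own hypothetical numbers for illustration only; the actual rate and terms are still to be worked out by the state.

```python
# Minimal sketch of the "pay it forward" idea: tuition is waived up front,
# and graduates repay a fixed share of income over a 20-year window.
# The 3% income share and the salary figures below are hypothetical
# illustrations, not details taken from the Oregon proposal.

def total_repaid(annual_incomes, income_share=0.03, years=20):
    """Sum what a graduate repays at a flat income share over the window."""
    return sum(income * income_share for income in annual_incomes[:years])

if __name__ == "__main__":
    # Hypothetical career: $40,000 starting salary, growing 3% per year.
    incomes = [40_000 * 1.03 ** year for year in range(20)]
    print(f"Total repaid over 20 years: ${total_repaid(incomes):,.0f}")
```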

Of course, all these marvelous ideas describe a cost-benefit analysis based on earnings alone – we haven’t touched on the intrinsic value of education as a merit in its own right. These programs may or may not provide a holistic educational experience equivalent to the small liberal arts experience, but let’s get real: relatively few of us can afford the traditional model. My family was middle-class, but I had four siblings and we all went to college, so money was tight. A small, private liberal arts college was never even mentioned as an option for us; it was considered out of reach. I put myself through part of my undergrad and all of my graduate courses, so I have a keen appreciation for time and money when it comes to education.

I constantly ask myself what path I would choose if I were a student today. I know for sure it would be online, because I would be working full-time. I’d look for affordability, try to gauge my earning potential to help me choose a degree, and weigh the reputation of the school strongly. So, though I would be very, very tempted by U. Now, I might choose a specific program like Udacity’s computer science degree from Georgia Tech. Other specifics would emerge to shape my decision, so this is just a gut feeling, but I doubt I would choose the small liberal arts model today. I simply couldn’t afford the time or money, and wouldn’t want the debt. Maybe small liberal arts schools don’t care so much about that – they are fine remaining small, boutique bastions of tradition – but I believe this is a shrinking market in an ever-more tech-savvy world. Further, the small liberal arts model is not geared to life-long learners. Why wouldn’t I prefer a school that might be able to let me update my skills for life? How about it, U. Now? Do you see that component added to your curriculum in your future? Small liberal arts schools, would you be willing to update to accommodate your alumni for continuing education?

https://coggle.it/diagram/51d495881f719b86400010fa/36d0b338e1ba9af0b09bd9414e8bcdc61730bc2ab36184214972c62bf8530de2

Buh bye, cable. Roku 3 now streams PBS content.

I’ve been growing tired of cable TV for a long time now, but until recently I wasn’t sure I was willing to give up my DVR, channel-surfing, La-Z-Boy ways. The reasons:

1. Roku 3 – it’s getting terrific reviews and, as announced yesterday, it now supports PBS, so I can get my Masterpiece fix
2. Cable TV is now virtually all commercials, bad reality shows, or repetitive, sensationalist news programming like the Arias trial
3. I still have to pay for a baffling array of channels that I have no interest in whatsoever
4. Verizon sold my contract to Frontier, which is beyond horrible for customer support, and I don’t want to give them one cent more than I have to
5. Let’s face it, TV is not exactly the best use of my precious time anyway

So as soon as I can get my Roku 3 delivered and installed, I’m calling to cut the cable cord and be free at last. Dumping my land line for cell only was a great decision, and this feels like one too.

Next up: find a cell plan that lets me buy only the voice and data services we actually use. I pay every month for service I never use in a big-bucket plan; it’s a rip-off.

Wait, wait, don’t buy that new computer

According to this article, http://gizmodo.com/now-is-a-horrible-time-to-buy-a-laptop-496028699, we should all wait until the newest Intel Haswell processor-enabled computers come out, presumably in June. If you are wondering what a Haswell chip is, it’s supposed to be much faster and much more energy-efficient, and it’s billed as One Chip To Rule Them All (meaning it can be used in all your devices). In practical terms, the average user might run a new laptop all day long on a single charge and find that multimedia displays much more quickly, perhaps without the need for a discrete graphics card. Now, that’s pretty cool – I’d definitely perk up and pay attention for an all-day-battery device.

Bragging about Rob Gardner’s blogs

Edublogs is a pretty cool blogging tool, no doubt about it. I have been a fan since I saw a demo at an EDUCAUSE conference years ago, back when it was still grant-supported. I told one of our professors, Rob Gardner, about it a couple of years ago, and he has done some really amazing things with the platform in his courses. Now, I think he deserves some well-earned praise. I particularly like this lovely site, which Rob created but whose content he credits to his students:

http://environmentalsoc.edublogs.org/ground-rules-safe-spaces/

Authentic – what exactly does it mean, and is it respected?

One of the chief characteristics of today’s student is that he or she seeks authentic experiences in learning. Educators are most effective when they put the learner at the center of the learning experience. I’m a fan of keeping it real, as they say, but does academia respect that authenticity very much?

We in higher education tend to hold each other’s feet to the fire in terms of using evidence to support our various claims. Toward that end, we try to eschew personal opinions in favor of facts – and thus we tend not to value reports or studies that seem at all personal in nature. This opinion avoidance gets taken to ridiculous extremes at times. After all, requiring authors to write in the third person may seem to indicate some level of impersonal observation on the part of the researcher, but that doesn’t actually stop authors from producing biased reports. Meanwhile, we are forced to read and write incredibly awkward pieces in which we express things like, “The researcher determined that …” instead of “I saw that…” It’s as though we try to pretend that we are robots floating in space, looking down on some event on the surface of planet Earth in which we are only remotely, passively interested. No wonder academic writing is so unreadably dull! It may be real, but it sounds inauthentic.

Why can’t we speak as the human beings that we are? Why do we try to pretend we are free of bias when we know we are not? We can admit that we hope to find a particular outcome while still being truthful about the outcome we actually get, can we not? We should be able to test our instruments on a variety of people to ensure clarity, regardless of whether we admit to our biases.