
WebCenter User Experience and Interaction
From iPads to Xbox

John Sim, Fishbowl Solutions
@JRSim_UIX

Introduction

In recent years User Experience (UX) has become increasingly important. Organizations can no longer get away with ignoring the ways users interact with applications both inside and outside the office. By investing time in prototyping and usability testing, organizations can help their users be as productive as possible. As we look to the future, the traditional mouse-and-keyboard interaction model is simply not enough. Market innovators are developing new methods to enhance applications with technologies like voice, touch and motion gestures that enable users to interact with and locate the information they need quickly and easily in new and interesting ways.

There has already been an explosion of new requirements and support for the latest touch devices; we see evidence of this even in Oracle WebCenter PS5. Recognized standards are coming into play more and more often as users become increasingly familiar with touch devices and technologies like the Xbox Kinect. These technologies allow users to interact with innovative UIs like the Windows Metro interface used within the Windows 8 operating system, the Xbox gaming platform and Windows Mobile.

The Metro UI

This whitepaper will showcase how we can now support multiple devices and use new input methods to interact with and enhance WebCenter's capabilities. By using custom-implemented features to provide touch, motion, and voice interactions within a web browser, users can escape from the keyboard while exploring new approaches to interacting with information.

Cross-Device Interface Support: Desktop to Tablet to Mobile

Whether you are designing for a desktop display, a BlackBerry, or the latest iPad 3, supporting multiple devices within one interface can be a challenge. This type of interface design is now widely referred to as responsive or adaptive design: an interface design where the user's environment, screen resolution, platform and orientation are all supported by one template. Clients have been asking for this support for years and, although it's not easy, it has been made easier with CSS3.

© 2012. Fishbowl Solutions, Inc.


Fluid grids (liquid layouts), flexible images, breakpoints, media queries and a touch of JavaScript are now the key elements to getting started with responsive design.

Fluid Grids / Liquid Layouts

Fluid grids provide the ability for a web site or application to scale its containing region elements based on the width of the browser viewport. This allows elements to be flexible and to reposition on the site to support multiple resolutions, ranging from desktops to mobile devices.
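Fluid grids replace fixed pixel widths with percentages. A minimal sketch of the classic responsive formula (target ÷ context × 100), using hypothetical layout numbers rather than anything from this paper:

```javascript
// Convert a fixed pixel width into a percentage of its containing element,
// so the region scales with the viewport instead of forcing scrollbars.
function fluidWidth(targetPx, contextPx) {
  return (targetPx / contextPx) * 100;
}

// e.g. a 300px sidebar inside a 960px layout becomes a 31.25% wide region
console.log(fluidWidth(300, 960) + "%"); // "31.25%"
```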

The new Smashing Magazine website is a perfect example of a responsive site. On the right you can see an example of the site and its breakpoints as the browser is scaled down from 1900px wide to 614px.

Initially all the navigation is vertical, positioned left of the content region. This provides a nice flow from site sections to section navigation elements. As we reduce the size of the site you will notice that the navigation model transforms to position these items in the header above the content, providing a constant fixed proportional width for the content information area of the site.

As we scale further down you will notice that the right-hand side of the site, which previously contained advertisements, tags and highlights, is removed, and the header now contains a large search field allowing users to filter and find required content more easily, whilst still providing an equal amount of real estate for the main content portion of the site, which users would find more relevant.

SmashingMagazine.com Working Example Responsive Site.

Flexible Images

Generally, flexible image use involves setting the image width to 100% and allowing the containing DOM element to manage the size of the image as it scales down or up. Another option is to use JavaScript to define which image to download based on the screen resolution. This is especially useful when implementing support for mobile users in areas where mobile internet speed is limited, as users are not forced to download large bandwidth-heavy images.

There are browsers like Amazon Silk and Opera Mini that will proxy and optimize the content delivered to the device, and some mobile providers even do this through their network; but do you want to risk having users who do not use those browsers or providers, and thereby risk the responsiveness of your site appearing slow? WebCenter Content can provide this optimization functionality out of the box and can create multiple files optimized from a source image at different resolutions for you to pull content into your website.
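The JavaScript option described above amounts to choosing a rendition before setting the image's src. A sketch under assumed rendition names (these are illustrative, not WebCenter Content defaults):

```javascript
// Pick an image rendition by screen width so small devices are never
// sent bandwidth-heavy desktop images.
function pickRendition(screenWidth) {
  if (screenWidth <= 480) return "photo-small.jpg";   // phones
  if (screenWidth <= 1024) return "photo-medium.jpg"; // tablets
  return "photo-large.jpg";                           // desktops
}

console.log(pickRendition(320));  // "photo-small.jpg"
console.log(pickRendition(1280)); // "photo-large.jpg"
```

In the browser you would call this with screen.width (or the viewport width) and assign the result to the img element's src.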

A good example and source for responsive images is https://github.com/filamentgroup/Responsive-Images

For any img elements that offer a larger desktop-friendly size, reference the larger image's source via a ?full= query string on the image URL. Note that the path after ?full= should be written so that it works directly as the src attribute as well (in other words, include the same path info you need for the small version).
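The ?full= convention described above can be sketched as a tiny helper that builds such a src value (the file paths here are hypothetical):

```javascript
// Build an img src following the convention above: the small image is the
// default, and the large rendition rides along in a ?full= query string.
function responsiveSrc(smallPath, largePath) {
  return smallPath + "?full=" + largePath;
}

console.log(responsiveSrc("images/photo-small.jpg", "images/photo-large.jpg"));
// "images/photo-small.jpg?full=images/photo-large.jpg"
```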


Breakpoints

As you work with your design in a fluid grid, you will find that your site just doesn't look right at certain resolutions. A breakpoint is where you define how the template should be altered based on the resolution and orientation. This is generally done with CSS media queries, although more complex designs will use JavaScript to handle this.

Example Breakpoints

These breakpoints will usually reflect the resolutions you want to support: 240px, 320px, 480px, 640px, 800px, 960px, 1280px+. This is not definitive, but serves as a useful guide. As devices change, graphics cards improve, and manufacturers compete to have the highest resolution displays, web developers must continue to evaluate the resolutions supported by their sites.

If you're not using the Web Developer toolbar plugin within Firefox, you can use http://responsivepx.com/, which allows you to define the dimensions and see how your site works within your predefined breakpoints.
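For the JavaScript-handled case mentioned above, breakpoint selection reduces to mapping a viewport width onto the guide list. A minimal sketch (CSS media queries normally do this for you):

```javascript
// Guide breakpoints from the text above, in ascending order.
var BREAKPOINTS = [240, 320, 480, 640, 800, 960, 1280];

// Return the widest guide breakpoint that still fits the viewport;
// widths below the smallest guide clamp to 240.
function activeBreakpoint(viewportWidth) {
  var active = BREAKPOINTS[0];
  for (var i = 0; i < BREAKPOINTS.length; i++) {
    if (viewportWidth >= BREAKPOINTS[i]) active = BREAKPOINTS[i];
  }
  return active;
}

console.log(activeBreakpoint(614));  // 480
console.log(activeBreakpoint(1900)); // 1280
```

In a page you would call this from a resize handler and swap a class name on the body element accordingly.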

CSS3 Media Queries

CSS media queries have been around for some time now. CSS2.1 allows developers to define stylesheets to be used for printing a page, WebTV, projector display and daily device use, but now with CSS3 we can define the styles that are used based on the following additional queries: max-width, min-width, device-width, orientation and color.

In the past, web designers have either had a single global stylesheet or have broken them into multiple styles (layout, colours, fonts) to be reused and to allow easy corporate brand management across their website, intranet, extranet and portal; but they have never really been widely used to support more complex multi-resolution-capable devices.

With CSS3 we can do this much more easily and can apply the required transformations for breakpoints on the fly. For example, if I rotate the screen I can load in a style to reposition crucial site elements, and if I resize the screen I can load in another style to make the site adaptive.

There are two ways of using CSS media queries:

1. Inline within the stylesheet, you can encapsulate the required CSS within the query:

@media only screen and (max-device-width: 480px) {
    body {
        background: red;
    }
}

2. Within the media attribute of the link tag, for example:

<link rel="stylesheet" media="only screen and (max-device-width: 480px)" href="mobile.css" />

For browsers that do not support CSS3 media queries, a JavaScript patch is available from http://code.google.com/p/css3-mediaqueries-js/ or https://github.com/scottjehl/Respond

Voice-Enabled Integration (Microphone)

Speech input is one of the latest innovative browser technologies to appear. It's easy to implement and there are several obvious uses:

o Assistive dictation for those with impaired mobility
o An alternative input option for mobile phones and tablets
o Support for environments where a keyboard or mouse is impractical
o Enhancing web site features like search and navigation


Example Xbox Metro UI with Voice-Enabled Kinect Integration

There are a number of methods to enable browser voice integration; here are some I have used:

1. Google Chrome Native Support

In March of last year Google released the HTML speech input API within their Chrome browser, giving developers the ability to transcribe voice to text from a webpage, a prototype for the HTML Speech Incubator Group.

Architecture Diagram


How does this work? An overview:

HTML Source

Speech test: <input type="text" x-webkit-speech />

Example of Enabled Speech Integration

With the attribute x-webkit-speech, a microphone is displayed within the input field.

1. The user selects the microphone and the "Speak now" modal window is displayed; as you speak you will see the sound range highlighted.
2. After you stop speaking, the sound wave is passed to Google's servers, where speech recognition software is used to analyze your voice.
3. An XML file is then passed back to the browser with the transcribed text and a few additional parameters, such as how close the recognition match was.
4. This text is then inserted into the selected text field.

The problem with this approach is that it is currently only supported by the latest WebKit browsers.

2. Dragon Speak, Kinect Voice SDK

If you need cross-browser device support for voice integration, you can now develop your own custom solution, with a few limitations.

Both Nuance and Microsoft can supply a voice recognition engine (Dragon Speak or Kinect Voice) that you can set up on your webserver. The challenge lies in getting the browser to capture the voice input. Here are a few options for tackling this:

2.1 Flash

Currently the only easily supported method is to use a Flash plugin, as this plugin has access to the audio and video input devices. An open source framework is available from http://speechapi.com. It communicates with its own voice engine, although you can configure it to point to your servers if desired.

function onLoaded() {
    speechapi.setupRecognition(
        "SIMPLE",
        document.getElementById('words').value,
        false,
        false
    );
}

var flashvars = {speechServer : "http://www.speechapi.com:8000/speechcloud"},
    params = {allowscriptaccess : "always"},
    attributes = {};
attributes.id = "flashContent";

swfobject.embedSWF(
    "http://www.speechapi.com/static/lib/speechapi-1.6.swf",
    "myAlternativeContent",
    "215", "138", "9.0.28",
    false,
    flashvars,
    params,
    attributes
);

speechapi.setup(
    "eli",
    "password",
    onResult,
    onFinishTTS,
    onLoaded,
    "flashContent"
);

function onResult(result) {
    document.getElementById('answer').innerHTML = result.text;
    speechapi.speak(result.text, "male");
}

function onFinishTTS() {
    //alert("finishTTS");
}

function ResetGrammar() {
    speechapi.setupRecognition(
        "SIMPLE",
        document.getElementById('words').value,
        false);
}


var recorder, recordTimer, recordCtlBut, audioStream;

// The stream arrives via the experimental <device> element's change event
// (per the Ericsson Labs post linked below); this.data is the audio stream.
document.getElementsByTagName("device")[0].onchange = function () {
    audioStream = this.data;
    recordCtlBut.disabled = false;
};

// in window.onload
recordCtlBut = document.getElementById("record_ctl_but");
recordCtlBut.onclick = function () {
    if (!recorder) {
        // start recording
        recordCtlBut.value = "Stop";
        recorder = audioStream.record();
        // set the maximum audio clip length to 10 seconds
        recordTimer = setTimeout(stopRecording, 10000);
    } else
        stopRecording();
};

function stopRecording() {
    clearTimeout(recordTimer);
    var audioFile = recorder.stop();
    useAudioFile(audioFile);
    // reset to allow new recording session
    recorder = null;
    recordCtlBut.value = "Record";
}

You can implement the above feature using the Web Real-Time Communication API, which gives you early access to experimental browser features:

https://labs.ericsson.com/developer-community/blog/beyond-html5-audio-capture-web-browsers

2.2 Mobile Web App Frameworks

Finally, if you are creating a mobile web application with a framework like PhoneGap-Callback, you have access to the capture method, which will enable you to record and transmit audio or video without the requirement of Flash.

The following is an example using PhoneGap 1.4.1. The methods have changed since 1.0, which ADF Mobile currently resides on.

http://docs.phonegap.com/en/1.4.1/phonegap_media_capture_capture.md.html#Capture


// Called when capture operation is finished
function captureSuccess(mediaFiles) {
    var i, len;
    for (i = 0, len = mediaFiles.length; i < len; i += 1) {
        uploadFile(mediaFiles[i]);
    }
}

// Called if something bad happens
function captureError(error) {
    var msg = 'An error occurred during capture: ' + error.code;
    navigator.notification.alert(msg, null, 'Uh oh!');
}

// A button will call this function
function captureAudio() {
    // Launch device audio recording application,
    // allowing user to capture up to 2 audio clips
    navigator.device.capture.captureAudio(captureSuccess, captureError, {limit: 2});
}

// Upload files to server
function uploadFile(mediaFile) {
    var ft = new FileTransfer(),
        path = mediaFile.fullPath,
        name = mediaFile.name;

    // AJAX upload, to be replaced with a socket implementation
    ft.upload(path,
        "URLToVoiceServer",
        function(result) {
            console.log('Upload success: ' + result.responseCode);
            console.log(result.bytesSent + ' bytes sent');
        },
        function(error) {
            console.log('Error uploading file ' + path + ': ' + error.code);
        },
        { fileName: name });
}

Capture Audio


Touch Events & Gestures: Integrating with Touch Screens

Touch integration with both websites and applications now plays a major role in the mobile and tablet world, with touch screen monitors and laptops just around the corner. We can already see that Microsoft has put a great amount of effort into Windows 8, allowing its next generation OS to be fully touch interactive and to incorporate rich HTML5-driven applications. More importantly, Oracle is also recognizing this with its latest WebCenter PatchSet 5, released in February.

ADF DVT Touch Graph Solution

Current Oracle PS5 ADF Touch Enhancements

o Data Visualization (DVT)

  Graph and Gauge now support HTML5 output format, supporting touch gestures for all the major interactivity features, such as selection, zoom and scroll, legend scrolling, time selector, data cursor and magnify lens.

  A new web.xml context parameter, oracle.adf.view.rich.dvt.DEFAULT_IMAGE_FORMAT, was introduced to change the default output format to HTML5. In addition, a new value for the imageFormat attribute, imageFormat="HTML5", is now supported to allow for explicit usage.

o Redistributing Touch Events for Tablets

  For tablet devices that support touch screens and do not have a mouse, the browser simulates some mouse events, but not all. In order to achieve functional equivalency on these platforms, client components need to be able to broadcast touch events. A new touch event object, AdfComponentTouchEvent, has been made available to components when agents support single or multiple "touchScreen" capabilities. Component peers can conditionally register for touch event handling using the same mechanism.
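The DVT default-output switch mentioned above is an ordinary web.xml context parameter. A sketch of the entry; the parameter name and HTML5 value come from the text, while the surrounding layout is the standard web.xml form:

```xml
<context-param>
  <param-name>oracle.adf.view.rich.dvt.DEFAULT_IMAGE_FORMAT</param-name>
  <param-value>HTML5</param-value>
</context-param>
```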


ADF Example:

AdfDhtmlPanelSplitterPeer.InitSubclass = function() {
    // Register event handlers specific to touch devices
    AdfRichUIPeer.addComponentEventHandlers(this,
        AdfComponentTouchEvent.TOUCH_START_TYPE,
        AdfComponentTouchEvent.TOUCH_END_TYPE,
        AdfComponentTouchEvent.TOUCH_MOVE_TYPE);
};

AdfDhtmlPanelSplitterPeer.prototype.HandleComponentTouchMove = function(componentEvent) {...}
AdfDhtmlPanelSplitterPeer.prototype.HandleComponentTouchStart = function(componentEvent) {...}
AdfDhtmlPanelSplitterPeer.prototype.HandleComponentTouchEnd = function(componentEvent) {...}

o Simulating Context Menu and Tooltip Activation from Touch Gestures for Tablets

  WebKit on the iOS and Android platforms does not raise contextMenu events. On the desktop platform the contextMenu event is derived from the right mouse click. The tooltip is also not supported by these platforms; for desktop browsers the tooltip is shown on mouse-over of elements that have a title attribute. Since these tablet devices do not fire contextMenu events or show tooltips, Oracle has added an enhancement to simulate this event from touch gestures. The default gesture is tap+hold (500ms). However, this gesture is also used to activate component drag-and-drop. To resolve this conflict in cases where drag-and-drop behaviors exist for a component, the context menu and tooltip will be activated on tap+hold+finger-up. Only single-finger gestures can be used to activate context menus and tooltips.

o Drag and Drop on Touch Devices

  On touch devices like tablets, component drag-and-drop has a different gesture than with the mouse. An item that can be dragged must be activated by a tap-and-hold gesture. The item will change its appearance to indicate that it can be dragged once held long enough. The same gesture applies to reordering table columns: tap-and-hold the column header, then drag it to reorder the columns.

Developing With Other Frameworks

When developing with WebCenter Content, Sites or Portal, you can incorporate your own touch events if ADF Mobile does not provide the functionality out of the box and you are looking for a lightweight solution that is supported across devices.

o jQuery & ZeptoJS

  jQuery is a fast and concise JavaScript library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development, and is one of the most widely used JavaScript frameworks. Currently it does not have support for touch gestures (although you can write your own, and there are a wide variety of touch plugins available).

  If you are planning to support mobile devices, I would recommend taking a look at ZeptoJS; it is a minimalist JavaScript framework for modern web browsers with a jQuery-compatible syntax, aimed at supporting mobile devices, and it also incorporates a number of touch gestures: [tap, doubleTap, swipe, swipeLeft, swipeRight, swipeUp, swipeDown, pinch, pinchIn, pinchOut].
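Directional swipe gestures like the ones listed above boil down to comparing where a touch started and ended. A standalone sketch of that classification logic; the threshold value is an assumption, not Zepto's actual tuning:

```javascript
// Classify a completed touch by its start/end coordinates: movements
// smaller than the threshold are taps, otherwise the dominant axis and
// its sign pick the swipe direction.
function classifySwipe(startX, startY, endX, endY, threshold) {
  var dx = endX - startX, dy = endY - startY;
  if (Math.max(Math.abs(dx), Math.abs(dy)) < threshold) return "tap";
  if (Math.abs(dx) >= Math.abs(dy)) return dx > 0 ? "swipeRight" : "swipeLeft";
  return dy > 0 ? "swipeDown" : "swipeUp";
}

console.log(classifySwipe(200, 100, 40, 110, 30)); // "swipeLeft"
console.log(classifySwipe(100, 100, 104, 98, 30)); // "tap"
```

In a page you would record the coordinates on touchstart and call this from the touchend handler.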


o Other Libraries

  There are other libraries available, like Sencha Touch, jQuery Mobile, jQTouch, etc.; however, these provide a UI with the library for mobile devices and are not recommended for cross-platform interfaces spanning desktop to mobile.

Example of some of the standardized touch gestures

o Touch Points: Device Support and Considerations

  In iOS you can capture 11 points of simultaneous contact with the device (the eleventh is a mystery to everyone…). Other operating systems capture far fewer, although this is improving. Currently, when designing for touch interfaces, I do not apply support for more than 2 simultaneous touch interactions, due to support across devices and the requirements. I have never come across a requirement for more than this, and limiting to 2 touches also allows me to reuse my methods for motion events, as I standardize on using only 2 hands to interact with a screen.

The main JavaScript touch events are:

• touchstart – fires once
• touchmove – fires continuously
• touchend – fires once

Example:

element.addEventListener("touchstart", myTouchFunction, false);

Do not use the iOS gesture events [gesturestart, gesturechange, gestureend] unless you are only developing for iOS. IMPORTANT: gesture events are not supported on any other OS device.
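Wiring the three events above together, the 2-touch limit discussed earlier is enough for pinch-style gestures: measure the distance between the first two touch points. The distance helper below is pure logic, so it also runs outside the browser; the event wiring is shown in comments:

```javascript
// Distance between the first two touch points; 0 when fewer than two
// fingers are on the screen.
function touchDistance(touches) {
  if (touches.length < 2) return 0;
  var dx = touches[1].pageX - touches[0].pageX;
  var dy = touches[1].pageY - touches[0].pageY;
  return Math.sqrt(dx * dx + dy * dy);
}

// In a browser you would feed it event.touches from touchmove, e.g.:
// element.addEventListener("touchmove", function (e) {
//   console.log("spread:", touchDistance(e.touches));
// }, false);

console.log(touchDistance([{pageX: 0, pageY: 0}, {pageX: 3, pageY: 4}])); // 5
```

Comparing successive spread values tells you whether the fingers are pinching in (shrinking) or out (growing).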

For information on the available touch events, the Apple site is a great resource; just keep in mind it's designed for iOS and may not work on Android or other platforms.

https://developer.apple.com/library/IOs/#documentation/AppleApplications/Reference/SafariWebContent/HandlingEvents/HandlingEvents.html

Experimental Motion-Activated Gestures (Kinect)

Spatial Operating Environment

This is more for the innovators and hackers out there.

Browsers are now escaping the standard mouse and keyboard conventions, as we can see with touch interactions and voice integrations. There are also new API specifications and implementations using the latest custom browser builds (Chrome and Firefox), such as the Gamepad API, Mouse Lock API, and Full Screen API. You can review the draft Gamepad API here: http://dvcs.w3.org/hg/webevents/raw-file/default/gamepad.html

A fun preview of this can be seen here, using a wireless Xbox controller to interact with a browser: http://vimeo.com/31906995

Minority Report 2002 Conceptual Touch Screen Innovation

For those of you who have seen Minority Report, Tom Cruise navigates through a set of enormous screens of the future by gesturing his hands through the air. The Kinect sensor within the latest Microsoft Xbox console works in a similar way: a camera sensor plugs into the console or PC via a USB input, enabling users to interact with an interface via motion gestures.


How does this Work?

There are currently no browser APIs available for the Kinect; however, there are two ways I have managed to integrate the Kinect with the browser.

1. WebSocket Push

1. Plug the Kinect into your PC or Mac.
2. Install the Kinect drivers OpenNI and NITE.
3. From Device Manager you will see your Kinect is now recognized.
4. Set up a WebSocket server (I use nodeJS).
   (If you prefer, a tutorial can be found here for setting up nodeJS to run directly on the client's machine within a Firefox XUL extension, so an external server is not required: http://rawkes.com/blog/2011/12/05/running-node.js-from-within-a-firefox-xul-extension)
   a.) A WebSocket server plugin for WebLogic is soon to be released.
5. I wrote a small C++ app that passes the information from the Kinect to the socket server.
6. On the webpage I use the socket.io client JS lib, which opens a connection to the WebSocket server.
7. I pass an array of X, Y, Z coordinates from my C++ app connected to the Kinect through the WebSocket gateway, which transfers the information to the browser in real time, allowing me to create my own gestures and integrations with HTML5 elements like canvas.
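Step 7 above pushes X, Y, Z coordinates through the WebSocket. One simple way to frame those messages is JSON; this encode/decode pair is an illustrative sketch, and the field names are assumptions rather than the actual wire format used here:

```javascript
// Serialize one joint reading for the socket, with a timestamp so the
// page can drop stale frames.
function encodeJoint(x, y, z) {
  return JSON.stringify({ joint: "hand", x: x, y: y, z: z, t: Date.now() });
}

// Recover the coordinate triple the page draws from (e.g. onto a canvas).
function decodeJoint(message) {
  var m = JSON.parse(message);
  return [m.x, m.y, m.z];
}

var wire = encodeJoint(120, 85, 1400);
console.log(decodeJoint(wire)); // [ 120, 85, 1400 ]
```

On the page, decodeJoint would run inside the socket.io message handler, feeding each triple into your gesture-recognition code.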

2. DepthJS

DepthJS in action interacting with The New York Times website

Students at the Massachusetts Institute of Technology have gone further, inventing DepthJS, a browser extension (currently Chrome & Safari) that allows the Microsoft Kinect to talk to any web page. It provides low-level raw access to the Kinect as well as high-level hand gesture events to simplify development. These allow the Kinect to recognize hand and finger motions, allowing users to surf the internet and "handle" computer files. And, unlike Tom Cruise's character in Minority Report, no gloves are required.


1. Plug the Kinect into your PC or Mac.
2. Install the Kinect drivers OpenNI and NITE.
3. Set up the browser:
   a.) Chrome, Firefox: install the FireBreath plugin + DepthJS.
   b.) Safari: install the DepthJS extension.
   c.) There is currently no support for IE.
4. Include the depthjs lib on your page.
5. A demo with a few sample events can be seen here: https://github.com/doug/depthjs/blob/master/developer-api/BasicDemo.html

Kinect for Windows

If you know C++, Microsoft provides a great resource with their open SDK, allowing you to integrate with the Kinect voice recognition engine and more. You can find information on this here: http://www.microsoft.com/en-us/kinectforwindows/develop/

Conclusion

For the last 30 years the mouse and keyboard have been the main input devices for interacting with desktop interfaces. While other technology such as graphics cards, processors, and network infrastructure has significantly evolved during this period, it is only within the last couple of years that we have stepped out of the confinement of mouse-and-keyboard interactions and have begun to look at how other input devices can help improve our interaction with data. Touch integrations and spatially aware devices that were not even conceivable a decade ago will now let us push user experience to the next level.

