“Live Coding extends(Vision Factory)” Workshop

7 January 2009 at 11:56 am (3D, First ones, Hypermedia, Live Coding, openframeworks, Processing.org, School of Art)

2008 ended with style: a full-week workshop around Julien V3GA's Vision Factory API.

LiveCoding is an extension of the Vision Factory framework (used by Julien V3GA in his studio for professional work and for VJing as a personal project) and exists in two different beta versions:
● On a single computer;
● On up to 5 computers (up to 4 computers running Processing clients plus an extra one where the API actually runs).

To use Vision Factory, you edit a JavaScript file where you write your code. When you save the file, SpiderMonkey re-interprets the script and sends a “bug report” to Vision Factory. If no errors are found, the script is validated and displayed; otherwise the previous script keeps running and Vision Factory waits for a new, corrected, error-free version of the JavaScript file.

We use pre-programmed Vision Factory functions as the base structure of the JavaScript files. The most basic ones are equivalent to Processing's or openFrameworks' “void setup()”, “void update()” and “void draw()” functions. The init() function (the equivalent of setup()) is run only once, but the programmer can call it again at any moment. Both update() and render() (the equivalent of draw()) are loops and work like in openFrameworks. Here you can find some new features recently added by Julien V3GA.
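To picture that structure, here is a minimal Processing sketch (not Vision Factory code) that mirrors the skeleton the comparison refers to: setup() runs once, update() and draw() loop, and pressing a key re-runs the initialisation, the way init() can be recalled at any moment in a Vision Factory script. The initScene() helper is just an illustration.

float angle;
color bg;

void setup() {            // run once, like Vision Factory's init()
  size(400, 300);
  initScene();
}

void initScene() {        // re-callable initialisation
  angle = 0;
  bg = color(random(255), random(255), random(255));
}

void update() {           // logic step, like Vision Factory's update()
  angle += 0.02;
}

void draw() {             // render step, like Vision Factory's render()
  update();               // Processing has no automatic update(), so call it here
  background(bg);
  translate(width / 2, height / 2);
  rotate(angle);
  rect(-40, -40, 80, 80);
}

void keyPressed() {       // "recall init() at any moment"
  initScene();
}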


Video from Julien V3GA

Another characteristic of Vision Factory is that it is built on a layer structure. It also has built-in OSC protocol management, so it is simple to access layer properties and change them with OSC messages. In the network version, each computer is assigned to a layer (its JavaScript is bound to that specific layer) and, as in Photoshop, the higher layers mask the lower ones. Each computer runs a Processing client that sends its script whenever it is saved; Vision Factory receives the scripts and treats them just like in the non-network version. Finally, we can control Vision Factory's final render: splitting the screen to give each JavaScript its own render, or assembling the different layers into one single screen so the different scripts are superimposed.
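To give an idea of what driving a layer over OSC could look like, here is a minimal Processing sketch using the oscP5 library. The /layer/1/opacity address, the port 12000 and the assumption that Vision Factory would understand this exact message are mine for the example; the real Vision Factory namespace is not documented here.

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress visionFactory;

void setup() {
  size(200, 200);
  osc = new OscP5(this, 9000);                         // local listening port (arbitrary)
  visionFactory = new NetAddress("127.0.0.1", 12000);  // host/port where Vision Factory would listen (assumed)
}

void draw() {
  background(0);
}

void mouseDragged() {
  // Map the mouse position to a 0..1 value and send it as a layer property.
  OscMessage msg = new OscMessage("/layer/1/opacity"); // hypothetical address pattern
  msg.add(map(mouseY, 0, height, 1.0, 0.0));
  osc.send(msg, visionFactory);
}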

Vision Factory is currently not released.

LiveCoding

Julien V3GA also showed us an iPhone/iPod touch application called Mrmr that he uses for his VJing work. The application lets you configure an interface and send data using Open Sound Control. The device's multitouch screen is well suited to controlling sound and visualization parameters (like a MIDI controller), with the advantage that you can do it wirelessly.
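On the receiving end, a Processing sketch can listen for the kind of messages a controller like Mrmr sends. Here is a minimal sketch with oscP5; the /mrmr/slider/0 address and the port are assumptions, since the actual addresses depend on how the interface is configured in Mrmr.

import oscP5.*;

OscP5 osc;
float level = 0;   // parameter driven by the controller

void setup() {
  size(300, 300);
  osc = new OscP5(this, 8000);   // port the phone is configured to send to (assumed)
}

void draw() {
  background(0);
  fill(255);
  // Use the received value to drive a simple visualization parameter.
  ellipse(width / 2, height / 2, level * width, level * width);
}

void oscEvent(OscMessage msg) {
  // Mrmr addresses depend on your interface layout; this one is hypothetical.
  if (msg.checkAddrPattern("/mrmr/slider/0")) {
    level = msg.get(0).floatValue();
  }
}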

Considering all this, I am more than willing to start a project to implement Open Sound Control on a BlackBerry mobile phone (if you have heard of an open-source project in this direction, please leave me a comment presenting it). The advantage of a BlackBerry over other phones is the full (and comfortable) QWERTY/AZERTY keyboard, which lets you type fast enough for live coding. The first stage is to implement OSC, then to build a programming interface (the built-in notepad?), and finally to add the command features (connect to the server, reload the “setup()” function of the script, disconnect, open a script, etc.). A rough sketch of that first stage follows.
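To give an idea of what implementing OSC by hand involves, here is a rough sketch in Processing/Java syntax that encodes a single OSC message (address, type tag string, one float argument) following the OSC 1.0 binary layout and sends it over UDP. The /livecoding/reload address, host and port are placeholders, and a real BlackBerry (Java ME) version would have to use the platform's own networking API instead of java.net.

import java.io.ByteArrayOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Pad a string with NULs to a multiple of 4 bytes, as the OSC 1.0 spec requires.
void writePaddedString(ByteArrayOutputStream out, String s) {
  byte[] b = s.getBytes();
  out.write(b, 0, b.length);
  int pad = 4 - (b.length % 4);   // always at least one NUL terminator
  for (int i = 0; i < pad; i++) out.write(0);
}

// Write a 32-bit big-endian IEEE 754 float.
void writeFloat(ByteArrayOutputStream out, float f) {
  int bits = Float.floatToIntBits(f);
  out.write((bits >> 24) & 0xFF);
  out.write((bits >> 16) & 0xFF);
  out.write((bits >> 8) & 0xFF);
  out.write(bits & 0xFF);
}

void setup() {
  try {
    // Build "/livecoding/reload 1.0" by hand: address, type tags, one float argument.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writePaddedString(out, "/livecoding/reload");  // example address pattern
    writePaddedString(out, ",f");                  // type tag string: one float
    writeFloat(out, 1.0f);

    byte[] packet = out.toByteArray();
    DatagramSocket socket = new DatagramSocket();
    socket.send(new DatagramPacket(packet, packet.length,
                InetAddress.getByName("192.168.0.10"), 12000));  // placeholder host/port
    socket.close();
  } catch (Exception e) {
    println(e);
  }
}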

Well, I believe that I have some work to do now…


New 3D modeling tool by Microsoft

23 August 2008 at 10:37 am (3D, Hypermedia, Links)

I have just read a New York Times article presenting a new “free” 3D modeling tool called Photosynth, developed by Microsoft. To get a 3D photographic representation of a real object or space, you take as many pictures of it as you can (with a minimum of 3 pictures per area). The software then compares the pictures (I believe the logic used in this operation is similar to the one Photoshop uses to produce panoramic images from different shots; Microsoft added some algorithms to estimate distance by comparing object sizes) and you get a curious animation where you see the actual pictures and can navigate around, zoom in and out.

The thing is, some months ago I read another article, this time on Wired's website, presenting another tool that does exactly the same (you can find it here on my blog). The only difference is that Microsoft keeps the image data to produce a colorful representation, while the University of Washington's tool seems to represent only the vertices of the captured object.


Gamerz 02

9 January 2008 at 9:12 am (3D, Exhibition, Games, Hypermedia, Mechatronics, Processing.org, Sound, Videos, Wiimote)

Gamerz 02, an exhibition about artistic and experimental video games and game culture organized by Collectif M2F Créations, will be held in Aix-en-Provence from 15 to 27 January 2008. Here you can find a French description of the exhibition.

M2F Créations

Coming up… More content about this exhibition with photos, videos and comments…


Geekequation…

8 January 2008 at 12:23 am (3D, Hypermedia, Links, Processing.org, Wiimote)

TGS 2005 + Nintendo + Bluetooth + Johnny Lee + CES 2008 + Alienware = Head Tracking for Desktop VR ultra panoramic displays

Ok… This is not clear at all…

Here is the explanation for this post:

As some of you may know, the Nintendo Wii's controller, the Wiimote, was presented by Nintendo at the 2005 edition of the TGS (Tokyo Game Show). I've spoken many times about the Wiimote here; this small industrial object represents a huge advance in human-machine interaction. Best of all, it uses Bluetooth to link with the Wii. In fact, the Bluetooth signal used by the Wiimote is not encrypted, which allows other Bluetooth devices, such as a computer, to receive its data. That allows us to write programs that use the Wiimote, which is one of the objectives of the AOC classes in Hypermedia.

Reading the Digital Tools blog, I discovered the work of Johnny Chung Lee, a Ph.D. student at the Human-Computer Interaction Institute at Carnegie Mellon University. Among his works, I was captivated by the head-tracking device for VR (Virtual Reality) desktops. Here is a video, far more explicit than any text:

At the 2008 edition of CES (the Consumer Electronics Show), Alienware (a company that produces high-performance computer systems) presented an ultra-widescreen display. Here is a video showing it in action:

Now, imagine both videos working together… Nice… :)

This head-tracking device made me think of a Philips project: the WOWvx 3D display presented at this year's CES, which turns a flat screen into a 3D experience. Now, only hands-on experience can tell which of these two solutions provides a truly high-end 3D feeling.

Here is a link to some 3D videos for the Philips system.


Small push-pop exercise…

6 November 2007 at 1:25 am (3D, Processing.org)

I've made a small pushMatrix/popMatrix exercise. Using the mouse, the user can rotate a 3D sphere and a rectangle linked to the sphere.
The main objective of this exercise was to understand the push/pop matrix concept and to learn how to use it for rotations.
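The original applet and source are linked below; as a rough reconstruction of the idea (mine, not the original code), a sketch along these lines uses pushMatrix()/popMatrix() so the mouse-driven rotation applies to the sphere and to the rectangle attached to it, without affecting anything drawn afterwards.

float rotX = 0;
float rotY = 0;

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(30);
  lights();

  pushMatrix();                  // isolate the transformations of this group
  translate(width / 2, height / 2, 0);
  rotateX(rotX);                 // rotation driven by the mouse
  rotateY(rotY);

  noStroke();
  fill(200);
  sphere(60);                    // the sphere...

  translate(100, 0, 0);          // ...and a rectangle linked to it,
  fill(255, 120, 0);             // sharing the same rotation
  rect(-20, -20, 40, 40);

  popMatrix();                   // restore the untouched coordinate system
}

void mouseDragged() {
  rotY += (mouseX - pmouseX) * 0.01;
  rotX -= (mouseY - pmouseY) * 0.01;
}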

Push-Pop exercise

Animation here.

Source code: sphere

Built with Processing


3D Models from several 2D photos…

4 November 2007 at 1:33 pm (3D, Links)

A weird-science post: a University of Washington professor has found a new way to build 3D objects, using pictures published on Flickr by tourists. The different points of view provide the data needed to place points in space. It's an amazing technology that could be used to build totally new virtual spaces in a really different way.
