“Live Coding extends(Vision Factory)” Workshop

7 January 2009 at 11:56 am (3D, First ones, Hypermedia, Live Coding, openframeworks, Processing.org, School of Art)

2008 ended in style: a full-week workshop around Julien V3GA's Vision Factory API.

LiveCoding is an extension of the Vision Factory framework (used by Julien V3GA in his studio for professional work and, personally, for VJing) and exists in two different beta versions:
● On a single computer;
● Across up to 5 computers (up to 4 computers running Processing, plus an extra one where the API actually runs).

To use Vision Factory, you edit a JavaScript file where you write your code. When you save the file, SpiderMonkey re-interprets the script and sends a "bug report" to Vision Factory. If no errors are found, the script is validated and displayed; otherwise the previous script keeps running and Vision Factory waits for a new, error-free version of the JavaScript file.

We use pre-programmed Vision Factory functions as the base structure of the JavaScript files. The most basic ones are equivalent to Processing's or openFrameworks' setup(), update() and draw() functions. init() (the equivalent of setup()) runs only once, but the programmer can call it again at any moment. update() and render() (the equivalent of draw()) are both loops and work as in openFrameworks. Here you can find some new features recently added by Julien V3GA.


Video from Julien V3GA

Another characteristic of Vision Factory is that it is built on a layer structure, and it has built-in OSC protocol support: it is simple to access layer properties and change them using OSC messages. In the network version, each computer is assigned to a layer (its JavaScript is bound to that specific layer), and, as in Photoshop, the higher layers mask the lower ones. Each computer runs a Processing client that sends the script whenever it is saved; Vision Factory receives the scripts and treats them exactly as in the single-computer version. Finally, we can control Vision Factory's final render: splitting the screen to give each JavaScript its own render, or assembling the different layers into one single screen, with the different scripts superimposed.
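As a rough illustration (not Vision Factory's actual protocol), here is what such a Processing client could look like using the oscP5 library: it re-sends the JavaScript file whenever it is saved and changes a layer property over OSC. The addresses, port, script path and the idea of sending the script as a single string are my assumptions for the sketch.

```java
// Hypothetical Processing client: watches a layer's JavaScript file and
// re-sends it on save, plus a layer-property change over OSC.
// Addresses, host and port are made up for illustration.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress visionFactory;
String scriptPath = "/path/to/layer3.js";   // hypothetical script location
long lastModified = 0;

void setup() {
  oscP5 = new OscP5(this, 12000);                        // local listening port
  visionFactory = new NetAddress("192.168.0.10", 9000);  // machine running the API
}

void draw() {
  java.io.File f = new java.io.File(scriptPath);
  if (f.lastModified() != lastModified) {                // the file was just saved
    lastModified = f.lastModified();
    String script = join(loadStrings(scriptPath), "\n");
    OscMessage msg = new OscMessage("/layer/3/script");  // hypothetical address
    msg.add(script);
    oscP5.send(msg, visionFactory);                      // hand the new script over
  }
}

void mousePressed() {
  // Example of driving a layer property directly, as described above.
  OscMessage msg = new OscMessage("/layer/3/opacity");   // hypothetical address
  msg.add(map(mouseX, 0, width, 0.0, 1.0));              // opacity between 0 and 1
  oscP5.send(msg, visionFactory);
}
```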

Vision Factory is currently not released.

LiveCoding

Julien V3GA also showed us an iPhone/iPod touch application called Mrmr that he uses for his VJing work. The application lets you configure an interface and send data using Open Sound Control. The device's multitouch screen is well suited to controlling sound and visual parameters (like a MIDI controller), with the advantage that you can do it wirelessly.
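On the receiving side, a Processing sketch only needs a few lines of oscP5 to listen to such a controller. This is just a sketch under my own assumptions: the address "/mrmr/fader/1", the port and the 0..1 value range all depend on how the interface is actually configured on the device.

```java
// Minimal receiver for a touch-controller fader sent over OSC.
// The address, port and value range below are assumptions, not Mrmr's defaults.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
float level = 0.5;   // visual parameter driven by the controller

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 8000);   // listen on the port the phone sends to
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/mrmr/fader/1") && msg.checkTypetag("f")) {
    level = msg.get(0).floatValue();   // assumed to arrive as a 0..1 float
  }
}

void draw() {
  background(level * 255);   // map the fader straight to brightness
}
```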

Considering all this, I am more than willing to start a project to implement Open Sound Control on a BlackBerry mobile phone (if anyone has heard of an open-source project in this direction, please leave me a comment presenting it). The advantage of a BlackBerry over other phones is its full (and comfortable) QWERTY/AZERTY keyboard, which lets you type fast enough for live coding. The first stage is to implement OSC, then to build a programming interface (the built-in notepad?), and finally to add the command features (connect to the server, reload the script's "setup()" function, disconnect, open a script, etc.).
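That first stage is mostly a matter of packing bytes: an OSC message is the address string, a type-tag string and the arguments, each padded to 4-byte boundaries, sent over UDP. Here is a small Processing/Java sketch of that packing as I understand the OSC 1.0 format; the address, host and port are invented, and on a BlackBerry the UDP part would have to be rewritten with the Java ME networking classes.

```java
// Hand-packed OSC message with a single float argument, sent over UDP.
// The packing logic is the part that would be ported to BlackBerry Java;
// the address "/livecoding/reload", host and port are placeholders.
import java.io.ByteArrayOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

byte[] oscString(String s) {
  byte[] raw = s.getBytes();
  byte[] out = new byte[(raw.length / 4 + 1) * 4];  // NUL-terminated, padded to 4 bytes
  System.arraycopy(raw, 0, out, 0, raw.length);
  return out;
}

byte[] oscMessage(String address, float value) {
  ByteArrayOutputStream buf = new ByteArrayOutputStream();
  byte[] a = oscString(address);
  buf.write(a, 0, a.length);                                  // address pattern
  byte[] t = oscString(",f");
  buf.write(t, 0, t.length);                                  // type tags: one float
  byte[] v = ByteBuffer.allocate(4).putFloat(value).array();  // big-endian float32
  buf.write(v, 0, v.length);
  return buf.toByteArray();
}

void setup() {
  try {
    byte[] packet = oscMessage("/livecoding/reload", 1.0f);   // hypothetical address
    DatagramSocket socket = new DatagramSocket();
    socket.send(new DatagramPacket(packet, packet.length,
        InetAddress.getByName("192.168.0.10"), 9000));        // example host/port
    socket.close();
  } catch (Exception e) {
    println(e);
  }
  exit();
}
```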

Well, I believe that I have some work to do now…


Sans titre 1.0

2 November 2008 at 2:32 pm (Hypermedia, openframeworks, School of Art, Wiimote)

Sans_titre_1.0 (Untitled_1.0 in English) is the first version of an interactive game-art installation. Visitors have to shake a teddy bear and scream as loud as they can in order to score as many points and blood drops as possible, all within a 20-second time window (and believe me, it's really physical).

This installation is a reflection on violence and anger. My objective is to provide a liberating experience, and through that experience to prompt reflection on why I made this game.

The game is still at an early stage of development, so new versions are coming soon.

An article appeared in a local newspaper (La Marseillaise, 27 October 2008):


Interesting interview

31 October 2008 at 1:34 pm (Books, Hypermedia)

Here is an interview in which Toby Segaran presents his book Programming Collective Intelligence, which is probably one of my next acquisitions. Enjoy:


New 3D modeling tool by Microsoft

23 August 2008 at 10:37 am (3D, Hypermedia, Links)

I have just read a New York Times article presenting a new "free" 3D modeling tool called Photosynth, developed by Microsoft. To get a 3D photo-representation of a real object or space, you take as many pictures of it as you can (with a minimum of 3 pictures per area). The software then compares the pictures (I believe the logic is similar to the one Photoshop uses to stitch panoramic images from different shots; Microsoft added some algorithms to estimate distance by comparing object sizes), and you get a strange animation in which you see the actual pictures and can navigate around, zooming in and out.

The thing is, some months ago I read another article, this time on Wired's website, presenting a tool that does exactly the same thing (you can find it here on my blog). The only difference is that Microsoft keeps the image data to produce a colorful representation, while Washington University's tool seems to represent only the vertices of the captured object.


Todaysart 2008

5 June 2008 at 2:54 pm (Exhibition, Hypermedia, Mechatronics, Sound)

The TodaysArt Festival 2008, which will take place on the 26th and 27th of September in the city centre of The Hague, closes the Dutch festival season and primes the cultural season, bringing over 300 artists to The Hague, from all four corners of the world, for a weekend of unique and cutting edge artistic showcases. Talented explorers invite you to witness their own personal take on today's creativity.

Excerpt from the festival's site. More info coming soon.


ARS Electronica 2008

5 June 2008 at 2:49 pm (Exhibition, Hypermedia, Links, Mechatronics, Sound, Videos)

 

Text from the festival's press release:

In 2008, the Ars Electronica Festival is scrutinizing the value of intellectual property and thereby facing one of the core issues of our modern knowledge-based society: that of freedom of information vs. copyright protection, big profit-making opportunities vs. the vision of an open knowledge-based society that seeks to build its new economy on the basis of creativity and innovation. And beyond that, we want to hammer out practical, workable rules to govern this new reality.

The 2008 Ars Electronica Festival. September 4 to 9. In Linz.
http://www.aec.at/culturaleconomy

The 2008 Ars Electronica Festival

If Old Europe's future prosperity truly is to be built upon creativity and innovation, then the free flow of knowledge is indispensable. Innovative business ideas and new marketing channels cannot be left to choke amidst a regulatory jungle enacted by individual nation-states or left up to the management practices of monopolists. Under the banner of "A NEW CULTURAL ECONOMY – When Intellectual Property Runs Up against Its Limits," the 2008 Ars Electronica Festival aims to co-author the preamble of this new knowledge-based society. What's at stake: the interplay of freedom of information and copyright protection, of big profit-making opportunities and the vision of an open knowledge-based society. And the fact that we still lack practical, workable regulations governing this new reality, rules whose formulation ought not to be left up to lawyers and MBAs alone.

Ars Electronica is inviting artists, network nomads, theoreticians, technologists and legal scholars from all over the world to convene in Linz September 4-9, 2008. Their artistic and scientific findings will be presented at symposia, exhibitions, performances and interventions staged in settings that go beyond classical conference spaces and cultural venues to permeate the cityscape at large. And as a final test-run before Linz's European Capital of Culture year in 2009, this production will heavily emphasize the interaction of our local network of cultural facilities and educational institutions.

The 2008 Ars Electronica Symposium

The computer and the Internet have tremendously accelerated the production and dissemination of information while slashing their price in the bargain. Suddenly, content is accessible worldwide. This has not only modified the way we deal with information; it has produced a shift in our whole economic system. We are being forced to adapt traditional conceptions to a changed technological reality. Some of us are already doing so quite successfully; others are resisting—and failing. This year's symposium will connect up application-users, artists, entrepreneurs, scholars and politicians, and provide an opportunity for them to get jointly geared up for what's ahead.

The 2008 Ars Electronica Symposium is being curated by Joichi Ito (J). Activist, entrepreneur and venture capitalist, Joichi Ito is founder and CEO of NEOTENY, a venture capital firm that specializes in personal communications and basic technology. He has started up numerous Web enterprises including PSINet Japan, Digital Garage and Infoseek Japan. In 2001, the World Economic Forum named him to its list of 100 Global Leaders for Tomorrow. As CEO of Creative Commons and a member of the board of ICANN, WITNESS and TECHNORATI et al., Joichi Ito is actively involved in cutting-edge Web 2.0 developments. Detailed info about Joichi Ito and Creative Commons is available online at http://joi.ito.com/

Festival site HERE.

 


Laboratoire des Fictions

2 June 2008 at 9:13 pm (Hypermedia, Mechatronics, Processing.org, School of Art, Sound, Videos)

 

The second semester of my academic year revolved around a "group project" called "Laboratoire des Fictions" (Laboratory of Fictions).

We had to get together and think about the fiction-laboratory topic. The project ended with a public presentation in the Studio space (at the Aix-en-Provence School of Art).

We (the class) had to think about different artworks and how to arrange them in the big room that is the Studio. Each of us presented projects, ideas, concepts and definitions, and we debated them in order to arrive at a coherent whole.

We decided to build a space where the visitor is manipulated and used as a laboratory object of study. The visit starts with charming attendants dressed like nurses/scientists who give the visitor a placebo pill, explaining that the active substance in the pill will make them feel never-before-felt sensations. The visitor then passes through a set of two corridors built with elastic fabric, designed to disturb the visitor's senses. A third corridor follows, where the floor is soft, the left wall captures the visitors' shadows, and the right wall (nearly invisible in the darkness) keeps falling towards the visitor's head. This third corridor leads to the main room, where several installations are set up.

The first installation the visitor sees is an anamorphosis activated by a giant hamster wheel that visitors are invited to run in. In the same room, the visitor can see the other side of the falling wall: this side is covered with mirrors, and when it falls (this time away from the visitor) our perception of verticality is disturbed and we feel as if we were falling. Another installation is a set of green neon lights that switch off after some time so that the visitor can see "la vie en rose" (an effect of persistence of vision: when the green light is switched off, the eyes need time to adapt and you see everything in the opposite colour, pink). While the neon lights are off, we hear a spatialized quadraphonic mix of several versions of "La vie en rose". On the floor, a crazy robot drives around avoiding walls and visitors, exhibiting bugs and errors, annoying people. Finally, a door in the centre of the room is used to play a Tetris game projected onto a water wall. The water wall is also the exit: the visitor has to choose between waiting for a random pause in the waterfall, playing the game to see whether it makes the water stop, or simply crossing with the water still running. Once outside the room, the visitor finds an evaluation machine called "I.D.I.O." (in French it means "stupid"), where, during a performance, some visitors are randomly chosen to be evaluated by a group of (crazy) scientists.

Since it's hard to explain the exhibition in words, here are some videos showing it:

Here is a 3D animation made by Floriane Rebatue:

My part in this project was the Tetris water-wall projection; I designed, built and programmed a large part of it. The game was programmed in Processing, and the door has sensors linked to a PIC 16F876. The water wall is 4 meters wide and 3 meters high; it is a closed circuit using a water pump to feed a pipe with 1500 holes (1 mm diameter). The collector is 4 meters long (in its central part) and 1 meter wide. A small bridge allows visitors to cross the water. The sound is generated by Pure Data (thanks to François Parra). I also got help from Jane, Pierre Loup and Florent.
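To give an idea of how the pieces talk to each other, here is a stripped-down sketch of the game's input side in Processing with oscP5: the door sensors (via the PIC/Pure Data side) are assumed to arrive as OSC messages, with the keyboard kept as a fallback. The OSC addresses and port are illustrative, not the ones used in the installation, and the game logic is reduced to stubs.

```java
// Reduced input layer for the Tetris water-wall game.
// OSC addresses and port are made up; movePiece()/rotatePiece() are stubs.
import oscP5.*;
import netP5.*;

OscP5 oscP5;

void setup() {
  size(600, 450);
  oscP5 = new OscP5(this, 12000);   // port the sensor bridge sends to
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/door/left"))   movePiece(-1);   // door pushed left
  if (msg.checkAddrPattern("/door/right"))  movePiece(1);    // door pushed right
  if (msg.checkAddrPattern("/door/rotate")) rotatePiece();   // rotate gesture
}

void keyPressed() {
  // Keyboard fallback, as in the version adapted for a computer keyboard.
  if (keyCode == LEFT)  movePiece(-1);
  if (keyCode == RIGHT) movePiece(1);
  if (keyCode == UP)    rotatePiece();
}

void movePiece(int dir) { /* shift the falling piece by dir columns */ }
void rotatePiece()      { /* rotate the falling piece */ }

void draw() {
  background(0);   // the actual game rendering would go here
}
```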

Here are some sketches:

Here is a video:

Here you can find the source code (adapted for use with a computer keyboard and with a PIC). It requires Processing and the oscP5 library (project site).

Here is the Pure Data patch (Pure Data with the OSC library enabled is required).


openFrameworks

26 April 2008 at 2:31 pm (Hypermedia)

openFrameworks is a next-generation open source, cross-platform C++ library designed by Zachary Lieberman (US).
The library is still unreleased (now in pre-release), but its site is already operational.

While we wait, here is a video about openFrameworks:

made with openFrameworks.

PS: WordPress seems to be hostile to videos that are not on YouTube or Google Video… so I'll just give you the link to the video… :(


Human-Machine interaction and interface

15 April 2008 at 11:26 am (Hypermedia, Links, Mechatronics)

I recently read a post at Digital Tools presenting the Don't Click It website, along with an article about the QWERTY keyboard. Visiting Don't Click It for the first time was a strange experience: I had to get used to the idea of not using the click button, and at the beginning it was quite annoying. The site has some interesting material about click culture, tracing its origin to a technical workaround from a time when computers and navigation were particularly poor. But clicking did enter our contemporary culture, and it does represent a voluntary action, unlike the way Don't Click It presents the gesture when speaking about spam, banners and annoying advertising. If we followed Don't Click It literally, we should stop using the keyboard as well, since the mouse is an extension of the keyboard (it works like the arrow keys for navigation plus the return key for the click, and a computer can work without a mouse but not without a keyboard). The mouse is just a pointer in screen space that lets the user navigate easily.

Real computer mouse

I then started to think about the possibility of building a click-free interface, and I built a small sketch in Processing based on the Don't Click It propositions: gesture reading and time control (you will see the beta version here soon; a minimal version of the time-control idea is sketched below). Once it was done, I realized that a click-free interface might not be the best for speed: the gesture is much bigger, and you need to learn each gesture, which can change from one interface to another. Then it struck me: Palms, PDAs and Pocket PCs already use mouse-free, and sometimes even keyboard-free, interfaces. I have a Palm Zire 71 (nearly old school by now), and I remember that learning to write with the pen and learning all the shortcuts wasn't easy. I always had to open the reminder application to recall how a gesture should be done (and sometimes I just couldn't reproduce it). The result: I type faster with a keyboard than by making all kinds of gestures that only poorly approximate handwriting (I write so badly on paper that I don't know how teachers manage to correct, or even read, my essays).

Palms are not the only mouse-free devices, either; we have mobile phones and tablet PCs. As a teenager I used to text so much that I could type a text message faster than on a regular keyboard (I used to take class notes on my cell phone). But today I type faster on a keyboard than on a mobile phone, so I think it's a matter of training and practice, and a cell-phone keypad and a computer keyboard are quite similar in concept: keys are pressed to reach a symbol. Then I thought about video games; they don't use keyboards… well, somehow they do: we press the controller's buttons to trigger an action or movement.
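For the curious, here is a minimal Processing version of the "time control" (dwell) idea mentioned above: hovering over the button long enough triggers it, with no click at all. The 800 ms dwell time and the layout are arbitrary choices, not values from the original site or from my beta version.

```java
// Dwell-based selection: hover over the rectangle long enough and it toggles.
int dwellTime = 800;       // milliseconds of hovering needed to "select"
int hoverStart = -1;       // moment the cursor entered the button, -1 = outside
boolean selected = false;

void setup() {
  size(400, 300);
}

void draw() {
  background(240);
  boolean over = mouseX > 150 && mouseX < 250 && mouseY > 120 && mouseY < 180;

  if (over) {
    if (hoverStart < 0) hoverStart = millis();   // just entered the button
    if (millis() - hoverStart > dwellTime) {
      selected = !selected;                      // dwell completed: toggle
      hoverStart = millis();                     // restart so it does not re-fire at once
    }
  } else {
    hoverStart = -1;                             // left the button: reset the timer
  }

  fill(selected ? color(200, 60, 60) : color(120));
  rect(150, 120, 100, 60);

  if (over) {
    float progress = constrain((millis() - hoverStart) / (float) dwellTime, 0, 1);
    fill(30);
    rect(150, 185, 100 * progress, 6);           // dwell progress bar under the button
  }
}
```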

And just when I was thinking that the mouse was the black sheep of human-machine interaction (translating a real movement into a screen movement), I realized I had completely ignored a brand-new trendy object: Nintendo's Wiimote. Thanks to its accelerometers and its built-in infrared camera, you can use it without pressing any buttons. Playing Zelda, for instance, you just swing the Wiimote like a sword and hold the Nunchuk as a shield. In Wii Sports you box as if it were for real. Those gestures are far easier than the old-school key combinations (like Up – Up – Left – A – Right – L1 – L2 – Left – R1 – A – Start – Select – Down – X – O – A – B – R2 – L1 just to jump backwards).

PS: No buttons were pressed during the footage of this video.

I don't think Don't Click It is bad research; I agree that it is interesting for future technologies to explore possible ways of moving beyond present devices and standards. Some of that research might bring us more practical solutions rather than just moving the problem around. I'm thinking of mind-controlled computers.


22 February 2008 at 4:56 pm (Exhibition, Hypermedia, Links, Sound, Videos)

This morning was marked by two talks and a performance at De Balie.

The first talk, "The Diorama Revisited", presented by Erkki Huhtamo, dealt with the history of the diorama and of the many words ending in "-ama" (panorama, diaporama, futurama…).

Here you can find videos of the performance "Digit" by Julien Maire, in which Maire prints sentences by passing his finger over white paper, using the words as lines to draw with.

The third morning session was a round table about yesterday's drone performance, with Stephen O'Malley, Joachim Nordwall and CM von Hausswolff, moderated by Mike Harding.

The afternoon started with the lecture "INTERACTIVITY AND IMMERSION", given by Jeffrey Shaw and Marnix de Nijs.

Jeffrey Shaw presented different technologies for producing images that provide an immersive experience, and the ways of interacting with these devices. He focused his talk mainly on the iCinema center. He presented CAVE immersion (projections on the walls, ceiling and floor), cylindrical immersive environments (the viewer stands in the center of a cylinder and the images are projected on its wall) and spherical modular video cameras (cameras that film 360°).

Marnix de Nijs presented some of his works:

Exercise in Immersion is a 3D immersive game in which the user wears a suit to travel inside a virtual world superimposed on the real space. The player is free to move around, and the interaction is driven by their movements.

Beijing Accelerator is an interactive installation with a rotating video projection. The viewer sits on a rotating chair with a joystick (which controls the chair's rotation); the objective is to synchronize the chair with the image.

Run Motherfucker Run is an interactive installation inviting the visitor to run through one of 25 scenes, mostly shot at night in the Rotterdam area. The device, a treadmill, tends to slow you down by increasing the running resistance. The piece is about adrenaline and the experience of speed.

You can also find this post at http://www.sonicacts.com/wordpress/?p=109.

