“Live Coding extends(Vision Factory)” Workshop

7 January 2009 at 11:56 am (3D, First ones, Hypermedia, Live Coding, openframeworks, Processing.org, School of Art)

2008 ended in style: a full-week workshop around Julien V3GA's Vision Factory API.

LiveCoding is an extension of the Vision Factory framework (used by Julien V3GA in his studio for professional work and for VJing personally) and exists in two different beta versions:
● On a single computer;
● On up to 5 computers (up to 4 computers running Processing clients plus an extra one where the API actually runs).

To use Vision Factory, you edit a JavaScript file where you write your code. When you save the file, SpiderMonkey re-interprets the script and sends an error report to Vision Factory. If no errors are found, the script is validated and displayed; otherwise the previous script keeps running and Vision Factory waits for a new, corrected, error-free version of the JavaScript file.

We use pre-programmed Vision Factory functions as base structures for the JavaScript files. The most basic ones are equivalent to Processing's or openFrameworks' "void setup()", "void update()" and "void draw()" functions. init() – the equivalent of setup() – is run only once, but the programmer can call it again at any moment. Both update() and render() – the equivalent of draw() – are loops and work like in openFrameworks. Here you can find some new features recently added by Julien V3GA.
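Vision Factory's scripts are JavaScript and the framework itself isn't released, so as a point of comparison here is the same init()/update()/render() split written as a plain Processing sketch (this is only an illustration of the structure, not Vision Factory code; note that Processing has no built-in update(), so the sketch calls its own):

// Minimal Processing sketch mirroring the init()/update()/render() split described above.
float angle;

void setup() {    // plays the role of init(): runs once, but could be called again by hand
  size(640, 480);
  angle = 0;
}

void update() {   // plays the role of update(): advance the state, no drawing
  angle += 0.02;
}

void draw() {     // plays the role of render(): draw the current state every frame
  update();       // Processing does not call update() automatically, so we do it here
  background(0);
  translate(width/2, height/2);
  rotate(angle);
  rect(-50, -50, 100, 100);
}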


Video from Julien V3GA

Another characteristic of Vision Factory is that it is built on a layer structure. Vision Factory has built-in OSC protocol management, so it is simple to access layer properties and change them using OSC messages. In the network version, each computer is assigned to a layer (its JavaScript is bound to that specific layer) and, like in Photoshop, the higher layers mask the lower ones. Each computer runs a Processing client that sends its script whenever it is saved; Vision Factory receives the scripts and treats them just like in the single-computer version. Finally, we can control Vision Factory's final render: splitting the screen to give each JavaScript its own render, or assembling the different layers into one single screen, superposing the output of the different scripts.
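Since layer properties are addressable over OSC, any OSC client can drive them. Vision Factory's actual address space isn't public, so the address and port below ("/layer/2/opacity" on port 9000) are invented for illustration; only the oscP5 calls themselves are standard:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress visionFactory;

void setup() {
  size(400, 400);
  osc = new OscP5(this, 12001);                          // local listening port (arbitrary)
  visionFactory = new NetAddress("192.168.0.10", 9000);  // machine running Vision Factory (assumed)
}

void draw() {
  // drive a layer property from the mouse; the address pattern is hypothetical
  OscMessage msg = new OscMessage("/layer/2/opacity");
  msg.add(map(mouseX, 0, width, 0.0, 1.0));
  osc.send(msg, visionFactory);
}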

Vision Factory is currently not released.

LiveCoding

Julien V3GA also showed us an iPhone/iPod Touch application called Mrmr that he uses for his VJing. The application lets you configure a custom interface and send data using Open Sound Control. The device's multitouch screen is well suited to controlling sound and visualization parameters (like a MIDI controller), with the advantage that you can do it wirelessly.
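On the receiving side, a Processing sketch (for instance) only needs an OSC listener. The address pattern and value range depend entirely on how the Mrmr interface is configured, so "/mrmr/slider/1", port 8000 and the 0–1 float below are placeholders:

import oscP5.*;

OscP5 osc;
float sliderValue = 0;    // last value received from the phone

void setup() {
  size(400, 400);
  osc = new OscP5(this, 8000);    // port the phone is set to send to (assumed)
}

void draw() {
  // draw a bar whose height follows the Mrmr slider
  background(0);
  fill(255);
  rect(0, height - sliderValue * height, width, sliderValue * height);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/mrmr/slider/1")) {   // placeholder address
    sliderValue = m.get(0).floatValue();        // assumed to be a float between 0 and 1
  }
}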

Considering all this, I am more than willing to start a project to implement Open Sound Control on a BlackBerry mobile phone (if someone has heard of an open-source project in this direction, please leave me a comment presenting it). The advantage of a BlackBerry over other phones is the full (and comfortable) QWERTY/AZERTY keyboard, which lets you type fast enough for live coding. The first stage is to implement OSC, then to make a programming interface (the built-in notepad?), and finally to add the command features (connect to the server, reload the "setup()" function of the script, disconnect, open a script, etc.).
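The first stage is less daunting than it sounds: an OSC 1.0 message is just a zero-padded address string, a type-tag string and big-endian arguments sent over UDP. Here is a minimal sketch of that encoding, written as Processing/Java rather than BlackBerry Java ME, with a made-up address and server:

import java.net.DatagramSocket;
import java.net.DatagramPacket;
import java.net.InetAddress;

// Pad an OSC string with zero bytes up to a multiple of 4 bytes, as OSC 1.0 requires.
byte[] oscString(String s) {
  int len = s.length() + 1;              // the string plus its terminating zero byte
  int padded = (len + 3) / 4 * 4;        // round up to a multiple of 4
  byte[] out = new byte[padded];
  arrayCopy(s.getBytes(), out);
  return out;
}

// Encode an address with a single big-endian float argument (type tag ",f").
byte[] oscMessage(String address, float value) {
  byte[] addr = oscString(address);
  byte[] tags = oscString(",f");
  byte[] arg = new byte[4];
  int bits = Float.floatToIntBits(value);
  arg[0] = (byte)(bits >> 24);
  arg[1] = (byte)(bits >> 16);
  arg[2] = (byte)(bits >> 8);
  arg[3] = (byte)bits;
  byte[] out = new byte[addr.length + tags.length + 4];
  arrayCopy(addr, 0, out, 0, addr.length);
  arrayCopy(tags, 0, out, addr.length, tags.length);
  arrayCopy(arg, 0, out, addr.length + tags.length, 4);
  return out;
}

void setup() {
  try {
    DatagramSocket socket = new DatagramSocket();
    byte[] packet = oscMessage("/livecoding/reload", 1.0);     // hypothetical address
    socket.send(new DatagramPacket(packet, packet.length,
                InetAddress.getByName("192.168.0.10"), 9000)); // assumed server and port
    socket.close();
  } catch (Exception e) {
    println(e);
  }
}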

Well, I believe that I have some work to do now…


Laboratoire des Fictions

2 June 2008 at 9:13 pm (Hypermedia, Mechatronics, Processing.org, School of Art, Sound, Videos)

 

The second semester of my academic year revolved around a "group project" called "Laboratoire des Fictions".

We had to get together to think about the fiction laboratory topic. The project ended with a public presentation in the Studio space (at the Aix-en-Provence School of Art).

We (the class) had to think about different artworks and how to arrange them in the big room that is the Studio. Each of us presented projects, ideas, concepts and definitions, and we debated them in order to get a coherent whole.

We decided to build a space where the visitor is manipulated and used as a laboratory object of study. The visit starts with charming attendants dressed as nurses/scientists who give the visitor a placebo pill, explaining that the active substance in the pill will make them feel never-before-felt sensations. The visitor then passes through a set of two corridors built with elastic fabrics, meant to disturb the visitor's senses. A third corridor, where the floor is soft, the left wall captures the visitors' shadows and the right wall (nearly invisible in the darkness) keeps falling on the visitor's head, leads to the main room where several installations are set up.

The first installation the visitor sees is an anamorphosis activated by a giant hamster wheel that visitors are invited to run in. In the same room, the visitor can see the other side of the falling wall; this side is covered with mirrors, and when it falls (this time away from the visitor) our perception of verticality is disturbed and we feel as if we were falling. Another installation is a set of green neon lights that switch off after some time so that the visitor sees "la vie en rose": there is a persistence-of-vision effect, and when the green light goes off the eyes need an adaptation time during which you see everything in the opposite color – pink. While the neon light is off, we hear a spatialized quadraphonic mix of several versions of "La vie en rose". On the floor, a crazy robot drives around avoiding walls and visitors, exhibiting bugs and errors and annoying people.

Finally, a door in the center of the room is used to play a Tetris game projected on a waterwall. The waterwall is also the exit: the visitor has to choose between waiting for a random stop of the waterfall, playing the game to see if that makes the water stop, or just crossing it, water and all. Once outside the room, the visitor finds an evaluation machine called "I.D.I.O." (in French it means "stupid"), where, during a performance, some of the visitors are randomly chosen to be evaluated by a group of (crazy) scientists.

Since it's hard to explain the exhibition in words, here are some videos showing it:

Here is a 3D animation made by Floriane Rebatue:

My work in this project is the Tetris waterwall projection; I designed, built and programmed a big part of it. The game was programmed in Processing, and the door has sensors linked to a PIC 16F876 microcontroller. The waterwall is 4 meters wide and 3 meters high; it's a closed circuit using a water pump to inject water into a pipe with 1500 holes (1 mm diameter). The collector is 4 meters long (in its central part) and 1 meter wide. A small bridge allows visitors to cross the water. The sound is generated by Pure Data (thanks to François Parra). I also got help from Jane, Pierre Loup and Florent.
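I won't reproduce the whole game here; as a rough sketch of one way the door sensors can reach the sketch, the PIC can send one character per detected movement over its serial line and Processing maps it to a Tetris action. The single-character protocol, port index and baud rate below are assumptions, not the actual code, which is linked further down:

import processing.serial.*;

Serial pic;

void setup() {
  size(400, 800);
  // open the serial port the PIC 16F876 is connected to (port index and baud rate are assumptions)
  pic = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(0);
  // ... the Tetris board would be drawn here ...
}

void serialEvent(Serial s) {
  char c = s.readChar();
  // hypothetical one-character protocol sent by the PIC when a door sensor is triggered
  if (c == 'L') movePieceLeft();
  if (c == 'R') movePieceRight();
  if (c == 'D') rotatePiece();
}

void movePieceLeft()  { println("move left"); }
void movePieceRight() { println("move right"); }
void rotatePiece()    { println("rotate"); }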

Here are some sketches:

Here is a video:

Here you can find the source code (adapted for use with a computer keyboard and with a PIC) – requires the oscP5 library (project site) and Processing.

Here is the Pure Data patch (requires Pure Data with the OSC library enabled).


Touching sound…

21 February 2008 at 12:06 pm (Hypermedia, Processing.org, Sound, Un-Usual Post, Wiimote)

With the development of technology, touch interfaces have become fashionable in today's geek society. Johnny Lee (previously quoted in this post) developed his own multitouch device using a Wiimote; here is a video explaining it:

Touch tools now appear for all kinds of things: video games, palmtops, mobile phones, DJ devices, remote controls, cash machine screens, etc. This technology first appeared with a single touch point, but now we are starting to see multitouch technology – with more than a single point. Here is a video from a group called iBand using a Nintendo DS and two iPhones as instruments.

 

The idea is not new; other tools using touch and multitouch interfaces exist, like Korg's Kaoss Pad, mini-KP and Kaossilator. Groups of artists have also worked with touch as an interface. It is the case for Reactable – the video speaks for itself, so check it out:


Gamerz 2.0

4 February 2008 at 9:13 pm (Circuit Bending Workshop, Exhibition, Games, Hypermedia, Mechatronics, Processing.org, Sound)

I didn't have time to write about the Gamerz 2.0 exhibition, so here I am trying to fix that…

First, Antonin Fournaud and Manuel Braun's Patch&KO, a mod of Street Fighter II introducing a control device where you must lose control to be able to play. The device is basically a hybrid between a bean machine, a pachinko and a marble machine, using iron balls in a field of pins that make electrical contacts. Each contact can be mapped to an action (hit, jump, etc.). Here is a video showing it in action:

Servovalve presented a "worm" version of Carbone: a piece of software that copies an image (a face, to be precise) in a random fashion.

Damien Aspe built a real and colorful Tetris wall called From Russia with fun:

Guillaume Stagnaro presented a piece called XOX: two robots playing Tic-tac-toe, programmed to never lose and never win. In this situation, the only way to win is not to play.

Grégoire Lauvin presented Weith Contest, a multiplayer music game where the gameplay is based on weight: the heaviest measurement triggers the sample.

Pierrick Thébault (from L16) made a cool hack of CyWorld, creating a porn version called CyPorn.

The night finished with a live musical performance by Confipop and Sidabitball, using Game Boys as instruments to generate sounds and images.

More information about the works presented here and the ones I didn’t mention here.


Growing algorithm – the result of Sound/Hypermedia AOC

2 February 2008 at 10:52 pm (Hypermedia, Processing.org, School of Art, Sound, Wiimote)

Last week was my last AOC Sound/Hypermedia class; here is a version of our (un)finished group work done during this course… In this project I worked with Marie Fontanel, Hong Seong Hye and Aurélie Loffroy.

The idea comes from tree bark… Here are some starting images:

Designed by Marie Fontanel

Designed by Marie Fontanel

We first tried to reproduce the node shapes using some spiral functions.

You can see the animation here.
You can see the source code in two parts: the first and the second part.
Built with Processing
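The exact functions we used are in the source above; as a rough illustration of the idea, a node can be approximated by a spiral whose radius grows a little at every step (all the constants below are arbitrary):

// Draws a single outward spiral of points, roughly like a bark node.
void setup() {
  size(400, 400);
  noLoop();               // a static drawing, so one frame is enough
}

void draw() {
  background(255);
  stroke(60, 40, 20);
  translate(width/2, height/2);
  float r = 1;
  for (float a = 0; a < TWO_PI * 8; a += 0.05) {   // eight turns
    point(r * cos(a), r * sin(a));
    r *= 1.005;                                    // grow the radius slightly each step
  }
}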

Then we decided to add drawing rules (inspired by the Game of Life): we draw using pixels[], and if the position where a pixel is going to be drawn is already occupied, the pixel searches for the fastest (and easiest) way around the occupied space; finally, if there is no way around it, the pixel chooses a random empty position. The code is still unfinished; I guess I'll change it later in order to add some new rules…
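Our real rules live in the (unfinished) source; a stripped-down sketch of the idea – a walker that writes into pixels[], steps around occupied pixels and jumps to a random empty spot when it is completely blocked – could look like this:

// Simplified growth rule: a walker draws into pixels[] and never overwrites an occupied pixel.
int x, y;

void setup() {
  size(400, 400);
  background(255);
  x = width / 2;
  y = height / 2;
}

void draw() {
  loadPixels();
  boolean placed = false;
  int[] dx = {1, -1, 0, 0};
  int[] dy = {0, 0, 1, -1};
  int start = int(random(4));
  // try the four neighbours in a rotating order and take the first free (white) one
  for (int i = 0; i < 4 && !placed; i++) {
    int nx = constrain(x + dx[(start + i) % 4], 0, width - 1);
    int ny = constrain(y + dy[(start + i) % 4], 0, height - 1);
    if (pixels[ny * width + nx] == color(255)) {
      x = nx;
      y = ny;
      pixels[y * width + x] = color(0);   // occupy the pixel
      placed = true;
    }
  }
  if (!placed) {
    // completely surrounded: try a few random spots and keep the first empty one
    for (int t = 0; t < 20; t++) {
      int rx = int(random(width));
      int ry = int(random(height));
      if (pixels[ry * width + rx] == color(255)) { x = rx; y = ry; break; }
    }
  }
  updatePixels();
}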

We integrated the Wiimote to use it as an eraser: when you shake the Wiimote, the image gets erased. To connect the Wiimote to Processing, we use a piece of software called OSCulator. For the sound part, we use Pure Data: Processing sends Pure Data the pixels' orientation (their angle), which modulates some sound samples (made with Audacity). Both OSCulator and Pure Data use the OSC protocol to communicate with Processing. Here you can find the OSC library.
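The working version is in the archive below; as a sketch of the OSC glue, assuming OSCulator forwards the Wiimote accelerometer to an address like /wii/1/accel/pry (the addresses, ports and shake threshold here are placeholders, since the routing is configured in OSCulator), it looks roughly like this:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pureData;

void setup() {
  size(400, 400);
  background(255);
  osc = new OscP5(this, 9000);                   // port OSCulator forwards to (assumed)
  pureData = new NetAddress("127.0.0.1", 9001);  // port Pure Data listens on (assumed)
}

void draw() {
  // ... the growing drawing happens here ...
  // send the current drawing angle to Pure Data (placeholder address and value)
  OscMessage out = new OscMessage("/growing/angle");
  out.add(random(TWO_PI));                       // stand-in for the real pixel orientation
  osc.send(out, pureData);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/wii/1/accel/pry")) {  // placeholder OSCulator address
    float pitch = m.get(0).floatValue();
    float roll  = m.get(1).floatValue();
    if (abs(pitch) > 0.8 || abs(roll) > 0.8) {   // crude shake detection
      background(255);                           // erase the drawing
    }
  }
}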

You can download the complete set here.


Gamerz 02

9 January 2008 at 9:12 am (3D, Exhibition, Games, Hypermedia, Mechatronics, Processing.org, Sound, Videos, Wiimote)

Gamerz 02, an exhibition about artistic and experimental video games/game culture organized by Collectif M2F Créations, will be held in Aix-en-Provence from 15 to 27 January 2008. Here you can find a French description of the exhibition.

M2F Créations

Coming up… More content about this exhibition with photos, videos and comments…


Geekequation…

8 January 2008 at 12:23 am (3D, Hypermedia, Links, Processing.org, Wiimote)

TGS 2005 + Nintendo + Bluetooth + Johnny Lee + CES 2008 + Alienware = Head Tracking for Desktop VR ultra panoramic displays

Ok… This is not clear at all…

Here is the explanation behind this post:

As some of you may know, the Nintendo Wii's controller, the Wiimote, was presented by Nintendo at the 2005 edition of the TGS (Tokyo Game Show). I've spoken many times about the Wiimote here; this small industrial object represents a huge advance in human-machine interaction. Best of all, it uses Bluetooth to link with the Wii. In fact, the Bluetooth signal used by the Wiimote is not encrypted, allowing other Bluetooth devices, like a computer for instance, to receive its data. That allows us to write programs using the Wiimote, which is one of the objectives of the AOC classes in Hypermedia.

Reading the Digital Tools blog, I discovered the work of Johnny Chung Lee, a Ph.D. student at the Human-Computer Interaction Institute at Carnegie Mellon University. Among his works, I was captivated by the head-tracking device for VR (Virtual Reality) desktop displays. Here is a video, far more explicit than any text:

At the 2008 edition of CES (the Consumer Electronics Show), Alienware (a company that produces high-performance computer systems) presented an ultra-widescreen display. Here is a video showing it in action:

Now, imagine both videos working together… Nice… :)

This head-tracking device made me think of a Philips project: the WOWvx 3D display presented at this year's CES, which turns a flat screen into a 3D experience. Now, only first-hand experience can tell which of these two solutions provides the most convincing 3D feeling.

Here is a link to some 3D videos for the Philips system.


Processing: the released books.

12 December 2007 at 12:47 pm (Books, Hypermedia, Links, Processing.org)

Since this summer, two books about the Processing programming language have been released. The first was Ira Greenberg's approach, with the book Processing: Creative Coding and Computational Art, published by friends of ED. The second, published by MIT Press, is Processing: A Programming Handbook for Visual Designers and Artists, which presents Casey Reas and Ben Fry's approach to learning code. There is another book about Processing: Built with Processing, written by Takeshi Maekawa (前川 峻志) – who contributed to the processing.jp website – and Kotaro Tanaka (田中 孝太郎), and published by BNN.

Since I neither read nor understand Japanese, and don't have the Built with Processing book, I'll present my impressions and opinions about Processing: Creative Coding and Computational Art and Processing: A Programming Handbook for Visual Designers and Artists.

Both are learning-oriented books: they present and explain the structure of the language and how to start building your own programs. The main and obvious differences between the two books are the writing style and the pedagogical approach.

Ira Greenberg's book has a more intuitive approach. He applies a smooth() function to all the technical stuff, showing that math, logic and programming are not that hard or boring. Reading the book, you feel like Greenberg is speaking with you and trying to reassure the reader. I believe this book is a good approach for people who want to learn Processing in a creative way, who have no programming skills and who are afraid of the computer world, math and the programming process.

Casey Reas and Ben Fry's book takes a more technical approach, with a more technical and formal language and style. It has an extended reference for every single function; in fact, the book is an extension of the reference pages of the Processing website. I would recommend this book more for people who already have some programming skills, or who are not afraid of formulas, computer science and technology.

Another difference between the two books is the artistic references they present. In Greenberg's book, we have a short summary of computational art and a list of some major artists (like Ben Fry, Jared Tarbell and others) with a short biography and some links. In Casey Reas and Ben Fry's book we find some more interesting material: a preface written by John Maeda (who developed Design By Numbers), and a set of interviews with artists (like Jared Tarbell, James Paterson and others) about one or two pieces of their work, asking what the piece is about, why they created it, what software tools they used to produce it, why they used those tools and why they chose to work with software. I think this approach is interesting, allowing us to see and understand the backstage of art production.


Biennial of Lyon – Part 4

10 December 2007 at 11:07 pm (Exhibition, Hypermedia, Links, Processing.org, Videos)

Collectif 1.0.3, shown at the MAC at the invitation of Pierre Joseph, presents Pierre Joseph's private computer data in the piece Planiscope.

(View of the Planiscope, Dominique Blais version – 22 August 2005)

This work is a poetic way to present the interface between the private and public spheres in the computer world. A computer is a machine that is able to store and process data. Data is a kind of extension of the concept of information. I mean, pure information is the analogue values of reality; it is a conceptualization of our perception. Putting information together in logical structures produces data. So storing data is a way to freeze conceptualized information about reality at a given point in space-time. Therefore, a computer allows you to store a part of your conception of reality, and the conception of reality as you interpret it is reflected in the way you store and organize data. But when you organize data, you first split it into public and private data (not always in a conscious way). A computer takes a neutral position towards the data it holds, so we tend to store all kinds of personal data in it, public and private. But the computer is so neutral that it stores and presents data from anybody to anybody, with no discrimination or selection. The Internet is a computer network for storing and transferring data, so connecting to the Internet is like opening a quantum portal to the data stored on all the other computers connected to the network. We can thus see the Internet as the sum of each person's conceptualization of reality, and surfing the Internet as reorganizing this data in order to build new data, and so new interpretations of reality. Blogging and reblogging are a good example of what I'm trying to explain.

Another piece by Collectif 1.0.3 is Voyage en URL.com, where you can download a screensaver that exposes your browser history – exposing the way you search for information and, with it, a definition and vision of reality. The way you associate data is a reflection of how you actually build data.

Another work with an obvious link to this issue is Life Sharing, by 0100101110101101.org (Eva and Franco Mattes), where they shared their personal computer data on a website. With a point-and-click graphical interface, users were able to explore their computer's directory tree.


Pascal Chirol's recent work Hyperonyme gives a visual interpretation of a data tree, presenting it in an explicit and weighted graphic form. Still on the same topic, Hard Flowers is an artistic piece of software, built with Processing, that generates a visual representation of the hard disk's directory tree: it does not show the actual public and private data, but uses it as parameters to build flowers.
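I don't know how Hard Flowers is implemented internally; as a toy illustration of the same principle (folder metadata as drawing parameters rather than the data itself), a Processing sketch can walk a directory with java.io.File and draw one flower per folder, with as many petals as the folder has entries:

import java.io.File;

void setup() {
  size(600, 400);
  background(255);
  noStroke();
  File root = new File(System.getProperty("user.home"));   // directory to visualize
  File[] children = root.listFiles();
  if (children == null) return;
  for (File child : children) {
    if (child.isDirectory()) {
      String[] entries = child.list();
      int petals = (entries == null) ? 1 : constrain(entries.length, 1, 60);
      drawFlower(random(width), random(height), petals);
    }
  }
}

// One flower per folder; the number of entries inside becomes the number of petals.
void drawFlower(float x, float y, int petals) {
  fill(random(100, 255), random(100, 255), random(100, 255), 180);
  for (int i = 0; i < petals; i++) {
    float a = TWO_PI * i / petals;
    ellipse(x + 20 * cos(a), y + 20 * sin(a), 18, 18);
  }
  fill(40);
  ellipse(x, y, 12, 12);
}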

This kind of piece shows the information network built by computers and data storage devices. It invites reflection on the use of social networks like Facebook, Orkut or MySpace, and of sites like Flickr and YouTube, used to store photo and video data in a public place. Web 2.0 democratizes information and access to information, but it also reveals and exposes the way we conceptualize reality. In fact, this very blog is part of that system: here and now I am exposing my personal vision and interpretation of many artworks, and exposing my own thoughts and work.

To finish, I hope you like this post, and I hope you will share your comments, criticisms and thoughts about it.


A/V Feedback

10 December 2007 at 12:30 am (Circuit Bending Workshop, Hybrid Workshop, Hypermedia, Processing.org, School of Art, Sound, Videos)

Here is an outline of my project for the hybrid workshop and the Sound/Hypermedia AOC (Creation Tool Workshop)…

Basically, I see the electronic components of the computer and its structure as a filter to generate and modulate sound. Using a telephone pickup coil (a kind of "mic" that registers electromagnetic variations; a few decades ago this kind of device was used to spy on phone calls), I listen to the computer's electromagnetic variations.

I process the collected signal with Pure Data, where I digitize it and send the data to Processing in order to produce a visualization of the sounds. I want to pick up the signal straight from the video card: since the computer processes both the audio and the video signal, we can consider this kind of experience a kind of A/V feedback.
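The actual patch and sketches are linked at the end of this post; as a sketch of the Processing side, assuming Pure Data sends an analysed amplitude over OSC to an address like /feedback/amp (address and port are placeholders), the visualization loop could be as simple as:

import oscP5.*;

OscP5 osc;
float amp = 0;    // latest amplitude value received from Pure Data

void setup() {
  size(400, 400);
  osc = new OscP5(this, 7400);   // port Pure Data sends to (placeholder)
}

void draw() {
  background(0);
  noFill();
  stroke(255);
  // the louder the electromagnetic noise, the bigger the circle
  ellipse(width/2, height/2, amp * width, amp * width);
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/feedback/amp")) {   // placeholder address
    amp = m.get(0).floatValue();
  }
}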

This work is based on three references. First, the most obvious for me : Alvin Lucier‘s piece, I am sitting in a room (1970). In this piece, Alvin Lucier uses a room as a filter, his voice is deformed by the room acoustic proprieties. In the end, the room is seen as a music instrument.

Another obvious reference is the Feedback exhibition (held in 2007) in Gijón – Spain. Among the art works, the one of the 5voltcore piece : Shockbot Corejulio (2004), a hacked compute, where a robot produces circuit connexions directly on the video card.

Finally, once again, Nicola Collins‘ workshop about circuit bending and work with feedback is my last reference.

To finish, I've made a small animation as an outline of my project; here are the details:

Animation here

Source codes: Presentation_AOC_LAB_DS_Fictions_2_c Electre Moniteur Noise Plug Pure_Data Speaker carte

Source code zipped

Built with Processing

