Hypercube construction

27 February 2008 at 11:22 am (Mechatronics, Un-Usual Post, Videos)

I’ve decided to write this post to try to answer Eggshell Robotic’s question “Tesseract / Hypercube – Mechanical Possible?“… The main reason to make this a post on my blog rather than a comment on Eggshell Robotic’s post is mainly that I wrote a lot… and I use images and links… so here it is:

A hypercube is basically 8 cubes organized using the same logic we use to build a square (2 dimensions) from strokes (1 dimension), and to build a cube (3 dimensions) from squares (2 dimensions)… We take the first cube and join each of its 6 faces with one face of 6 other cubes… then we go into the 4th dimension by joining all the faces that are next to each other… The 8th cube is used to close the hypervolume…
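To make the counting concrete, here is a small sketch (in Python, purely as an illustration — the construction itself is language-agnostic) that builds the 16 vertices of a hypercube and recovers its 32 edges and the 8 cubical cells described above:

```python
from itertools import product

# The 16 vertices of a unit hypercube: every 4-tuple of 0s and 1s
vertices = list(product((0, 1), repeat=4))

# An edge joins two vertices that differ in exactly one coordinate
edges = [(a, b) for i, a in enumerate(vertices) for b in vertices[i + 1:]
         if sum(x != y for x, y in zip(a, b)) == 1]

# A cubical cell is obtained by fixing one coordinate at 0 or 1:
# 4 axes x 2 values = the 8 cubes of the construction
cells = [[v for v in vertices if v[axis] == val]
         for axis in range(4) for val in (0, 1)]

print(len(vertices), len(edges), len(cells))  # 16 32 8
```

Each cell contains 8 vertices, i.e. it really is an ordinary cube living inside the hypercube.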

It might be easier to explain by comparison with the cube construction… We have 6 squares (2D); we take one to use as the center. We join 4 other squares around the one in the center… (still in 2D)… Now we rotate the 4 new squares into the upper dimension in order to link them (now we are in 3D), but the cube is not closed, so we have a 6th square to close the missing face of the cube… Here is a site showing part of the process.

Now… Imagine a creature that has a 2D perception of the world… It can only move in 2 dimensions. Let’s imagine that one of those 2D-perceiving creatures lives on one of the 4 faces that will rotate into the third dimension of our future cube… When we add its world (the square) to the rest of the cube net, our dear creature will see a huge expansion of its world… Imagine: in an imperceptible lapse of time, it sees its world’s size multiplied by 5 (scientists would go crazy!)… But when we build the 3rd dimension by rotating the cube’s faces, the creature will see its world become 5 times smaller, and it will not be able to perceive the rest of the cube (scientists go crazy again, then they start research and create some hyperspace theory proposing a 3rd dimension)… In fact this creature can’t build a cube… because it lives in a 2D world (and a cube is, by definition, 3D)… so even if it managed to build a cube, it would only see a two-dimensional section of it… and it would observe that the rest of the material used to build the cube just “disappears”…

With us it works the same way… We have a 3D perception (for spatial dimensions, the 4th being time – or the ability to perceive the changes of our space)… If we want to build a hypercube we need 4 spatial dimensions (plus time)… So we could build a hypercube, but we wouldn’t be able to see it in its totality… We would only see a 3D projection of the hypercube…

What we see in this animation is a hypercube that remains still in our 3 dimensions and is manipulated in the 4th dimension… To obtain that movement, we have to be able to manipulate the hypercube’s 4th dimension. So, supposing we managed to build a hypercube, we would also need to manipulate its 4th dimension… To do so, we would need hyper-dimensional engines… Really complicated (in my eyes…)
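A minimal sketch of what such an animation computes, assuming a rotation in the x–w plane (the “manipulation in the 4th dimension”) followed by a simple perspective projection into 3D — the animation linked above may of course use different formulas:

```python
import math
from itertools import product

def rotate_xw(p, theta):
    # Rotate a 4D point in the x-w plane, leaving y and z untouched
    x, y, z, w = p
    return (x * math.cos(theta) - w * math.sin(theta), y, z,
            x * math.sin(theta) + w * math.cos(theta))

def project(p, d=3.0):
    # Perspective projection 4D -> 3D: scale by the distance along w,
    # exactly like the familiar 3D -> 2D perspective projection
    x, y, z, w = p
    s = 1.0 / (d - w)
    return (x * s, y * s, z * s)

# One frame of the animation: rotate and project every vertex
vertices = list(product((-1, 1), repeat=4))
frame = [project(rotate_xw(v, 0.3)) for v in vertices]
```

Rendering a sequence of frames for increasing `theta` gives the familiar “cube turning inside out” movie: the hypercube itself never moves in our 3 dimensions, only its shadow does.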

But what forbids us from drawing inspiration from the hypercube’s transformations to find new ways to move? Nearly nothing (except the physical constraints and costs)… To be honest, I like the idea presented in Eggshell Robotic… And who knows, maybe sometime in the future we’ll play hyper-Rubik’s Cube?


Back in Aix en Provence

24 February 2008 at 11:55 pm (Exhibition, Links, School of Art, Semaine thématique, Sound)

Here I am, back home again in Aix en Provence, after a two-week trip to the Netherlands. In a future post, I’ll give more details about the 12th edition of Sonic Acts. I’ll also present Yolande Harris, an English artist working with sound, in residence at the Montevideo institute in collaboration with STEIM (STudio for Electro-Instrumental Music, located in Amsterdam).

In the coming week, at the Aix en Provence School of Art, we will have Sonotorium: 3 days of conferences about sound and art.


On the first day, February the 25th, Jean-Paul Ponthot, headmaster of the Aix en Provence School of Art, will present “Idéologie du bruit” (Ideology of Noise). Then Bastien Gallet will give “Le son et ses dehors” (Sound and Its Outsides). Finally, Christina Kubisch has programmed a sound projection.

On the 26th, Alexandre Castant will start with “Le son, l’image et son double” (Sound, Image and Its Double). Jerome Hansen will continue with “‘Le problème d’image’ des arts sonores, une généalogie en trois zones de contacts” (The Sound Arts’ ‘Image Problem’, a Genealogy in Three Contact Zones). We’ll finish the day with Kaffe Matthews‘ performance.

The last day, February the 27th, will start with “La forme comme traversée” (Shape as Crossing), presented by Christophe Kihm. The next conference will be “Son et déraison” (Sound and Unreason) by David Zerbib. To close the conference week, we will watch Philippe Franck’s selection of films:

“Luc Ferrari face à sa tautologie, 2 jours avant la fin” (Luc Ferrari Facing His Tautology, 2 Days Before the End; 2006, 52 min) by Guy-Marc Hinant and Dominique Lohlée;

“The movement of people working” (2003) by Phill Niblock.


22 February 2008 at 4:56 pm (Exhibition, Hypermedia, Links, Sound, Videos)

This morning was marked by conferences and a performance at De Balie.

The first conference, “The Diorama Revisited”, presented by Erkki Huhtamo, covered the history of the diorama and of many other words ending in “-ama” (like panorama, diaporama, futurama…).

Here you can find videos from the performance “Digit” by Julien Maire, in which Maire prints sentences by passing his finger over white paper. He uses the words as lines to draw.

The third morning conference was a round table about yesterday’s drone performance. The participants were Stephen O’Malley, Joachim Nordwall and CM von Hausswolff, moderated by Mike Harding.

The afternoon started with the conference “Interactivity and Immersion”, held by Jeffrey Shaw and Marnix de Nijs.

Jeffrey Shaw presented different technologies for producing images that provide an immersive experience, and ways to interact with these devices. He mainly focused his conference on the iCinema centre. He presented CAVE immersion (projections on the walls, ceiling and floor), a cylindrical immersion environment (the viewer stands at the center of a cylinder and the images are projected on the cylinder’s walls) and spherical modular video cameras (cameras that film in 360°).

Marnix de Nijs presented some of his works:

Exercise in Immersion is a 3D immersive game where the user wears a suit to travel inside a virtual world superimposed over the real space. The player is free to move around; interactivity is controlled by his movements.

Beijing Accelerator is an interactive installation with a rotating video projection. The viewer sits on a rotating chair with a joystick (which controls the chair’s rotation). The objective is to synchronize the chair with the image.

Run Motherfucker Run is an interactive installation inviting the visitor to run within one of 25 scenes, mostly shot at night in the Rotterdam area. The device, a rolling carpet, tends to slow you down by increasing the running resistance. This piece is about adrenaline and the experience of speed.

You can also find this post at http://www.sonicacts.com/wordpress/?p=109.


Sonic Acts opening night

22 February 2008 at 2:45 pm (Exhibition, Hypermedia, Links, Sound)

Today, February the 21st 2008, was the opening night of the 12th edition of the Sonic Acts Festival in Amsterdam, Netherlands. The festival takes place in 4 different locations: the Netherlands Media Art Institute, also known as Montevideo; the Melkweg; the Paradiso; and De Balie.

The night started at Montevideo, where we could enjoy the exhibition opening. There we can see the installations of Ulf Langheinrich (Soil – 2005 – and OSC – 2006)

Julien Maire (Low Resolution Cinema – 2005 – and Exploding Camera – 2007)

Boris Debackere (probe)

and Kurt Hentschläger (Scape – 2007).

After that, at De Balie, we could watch Stan Brakhage‘s film Dog Star Man (1961–1964, 73’00); in parallel we could experience the live performance by the Drone People (Joachim Nordwall, Mika Vainio, Hildur Gudnadöttir, C. Spencer Yeh, Carl Michael von Hausswolff, Stephen O’Malley and BJ Nilsen). The live performance is a 4-hour succession of individual performances. No rules were defined, except to be alone on the stage and to end the way you began.

This post was done for the festival; you can see it here.


Sonicacts XII (2007) – Here I go

22 February 2008 at 8:56 am (Exhibition, Sound)

In the coming weeks, I will be following the 12th edition of the Sonicacts festival as a volunteer blogger… I’ll be publishing posts here on my blog and on the festival’s site.

Sonicacts XII (2007)


Touching sound…

21 February 2008 at 12:06 pm (Hypermedia, Processing.org, Sound, Un-Usual Post, Wiimote)

With the development of technology, touch interfaces have become fashionable in today’s geek society. Jonny Lee (previously quoted in this post) developed his own multi-touch device using a Wiimote; here is a video explaining it.

Touch tools appear for all kinds of things: video games, palmtops, mobile phones, DJ devices, remote controls, cash withdrawal screens, etc. This technology first appeared with a single touch point, but now we are starting to see multi-touch technology – with more than a single point. Here is a video from a group called iBand using a Nintendo DS and two iPhones as instruments.

 

The idea is not new; other tools using touch and multi-touch interfaces exist, like Korg’s Kaoss Pad, Mini-KP and Kaossilator. Groups of artists have also worked with touch as an interface. It is the case for Reactable; the video speaks for itself, so check it out:


Knowledge sharing

10 February 2008 at 5:41 pm (Un-Usual Post)

Today on Robert Hodgin’s blog – Flight404 – I read “Source code rumination“, which starts a discussion about how and what to share.

On Martin Wisniowski’s blog – Digital Tool – I read “Doing Research in the 21st Century“, his latest post in the Research and Theory part of his blog, presenting a point of view about changes in the way we do research in the 21st century.

And then I thought: both posts are linked (obviously), and I started to think about sharing, and how to do it in an intelligent way…

First of all, I’ll define what it is to share: to give a portion of something to another or to others. That means you don’t need to give everything; you can give parts of it.

Research is the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.

(definitions from Oxford American Dictionary)

Sharing allows people to do research, and by doing research one builds new facts and conclusions, which can be used in somebody else’s research, and so on. But doing research doesn’t mean that you try to find a complete solution or a perfect match. It means you’re trying to understand and to build new knowledge.

So a smart way to share would be to think about what someone could need in their research, and this is the hard part (the easy way would be to give everything in detail, or nothing at all). Sharing means more than simply giving; it means explaining and indicating the highlights of a problem.

For an artwork, it can be how and why it was made. For a program, it can be some of its main points. The way someone shares his knowledge, and the way somebody else sees and processes information, are both personal.

There is no perfect sharing and no perfect research. There are as many kinds of research as there are possibilities, which means an infinity. That doesn’t mean that something must be either wrong or correct. I see it more like qubits, where the result of a logic operation is both true and false. Quantum computers perform this kind of operation, and I believe that our logic is going to change when (and if) quantum computers become common. Systems like Wikipedia are a slight approach to quantum logic, using a propositional logic system where people can add their propositions to give a definition and share their knowledge.

Erwin Schrödinger devised a thought experiment with a cat in order to exemplify the concept of superposition in quantum mechanics. Here you’ll find an explanation of this experiment, known as “Schrödinger’s cat”. Basically, when you put a cat in a box with a device that can kill it and you close the box, the cat is both potentially dead and alive at the same time. And when you open the box, you’ll see only one possible outcome of the problem.


Gamerz 2.0

4 February 2008 at 9:13 pm (Circuit Bending Workshop, Exhibition, Games, Hypermedia, Mechatronics, Processing.org, Sound)

I didn’t have time to write about the Gamerz 2.0 exhibition, so here I am trying to fix that…

First, Antonin Fournaud and Manuel Braun’s Patch&KO, a mod of Street Fighter II introducing a control device where you must lose control to be able to play. The device is basically a hybrid between a bean machine, a Pachinko and a marble machine, using iron balls in a pin field to make electrical contacts. Each contact may be transformed into an action (like hit, jump, etc.). Here is a video showing it in action:

Servovalve presented a “worm” version of Carbone: a piece of software that copies an image (a face, to be precise) in a random mode.

Damien Aspe built a real and colorful Tetris wall called From Russia with fun:

Guillaume Stagnaro presented a piece called XOX: two robots playing Tic-tac-toe, programmed to never lose and never win. In this situation, the only way to win is not to play.
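XOX’s “never lose, never win” behaviour is exactly what perfect play produces in Tic-tac-toe: between two optimal players, the game is always a draw. A quick minimax sketch (my own illustration in Python, not Stagnaro’s actual code) verifies this:

```python
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    # Return +1 or -1 if that player has three in a row, else 0
    for i, j, k in WINS:
        if board[i] != 0 and board[i] == board[j] == board[k]:
            return board[i]
    return 0

def minimax(board, player):
    # Value of the position under perfect play: +1, 0 or -1
    w = winner(board)
    if w:
        return w
    moves = [i for i in range(9) if board[i] == 0]
    if not moves:
        return 0  # full board, no winner: a draw
    values = []
    for i in moves:
        board[i] = player     # try the move
        values.append(minimax(board, -player))
        board[i] = 0          # undo it
    return max(values) if player == 1 else min(values)

print(minimax([0] * 9, 1))  # 0: perfect play from an empty board is a draw
```

So two robots that both play this strategy are condemned to draw forever, which is the whole point of the piece.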

Grégoire Lauvin presented Weight Contest, a multiplayer music game where the gameplay is based on weight: the heaviest measure plays the sample.

Pierrick Thébault (from L16) made a cool hack of CyWorld, making a porn version called CyPorn.

The night finished with a live musical performance by Confipop and Sidabitball, using Game Boys as instruments to generate sounds and images.

You can find more information about the works presented here, and about the ones I didn’t mention, here.


Growing algorithm – the result of Sound/Hypermedia AOC

2 February 2008 at 10:52 pm (Hypermedia, Processing.org, School of Art, Sound, Wiimote)

Last week was my last AOC Sound/Hypermedia class; here is a version of our (un)finished group work done during this course… In this project I worked with Marie Fontanel, Hong Seong Hye and Aurélie Loffroy.

The idea comes from tree bark… Here are some starting images:

Designed by Marie Fontanel

We first tried to reproduce the nodes’ shapes using some spiral functions.

You can see the animation here.
You can see the source code in two parts: first and second part.
Built with Processing

Then we decided to add drawing rules (inspired by the Game of Life): we draw using pixels[]; if the position where the pixel is going to be drawn is occupied, the pixel searches for the fastest (and easiest) way to go around the occupied space; finally, if there is no way around the occupied space, the pixel chooses a random empty position. The code is still unfinished; I guess I’ll change it later to add some new rules…
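A rough sketch of that rule, in Python rather than Processing and with made-up names and grid size (the real code is the source linked above): each new pixel tries its target cell, then walks the free neighbours of that cell, and only falls back to a random empty cell when everything around is occupied:

```python
import random

SIZE = 32          # illustrative grid size, not the one from the project
occupied = set()   # cells already drawn

# 8-connected neighbourhood, as in the Game of Life
NEIGHBOURS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0)]

def place_pixel(x, y):
    """Place a pixel at (x, y), going around occupied space if needed."""
    if (x, y) not in occupied:
        occupied.add((x, y))
        return (x, y)
    # Occupied: try the cheapest way around, i.e. a free neighbour
    for dx, dy in NEIGHBOURS:
        nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
        if (nx, ny) not in occupied:
            occupied.add((nx, ny))
            return (nx, ny)
    # No way around: pick a random empty cell anywhere on the grid
    empty = [(i, j) for i in range(SIZE) for j in range(SIZE)
             if (i, j) not in occupied]
    choice = random.choice(empty)
    occupied.add(choice)
    return choice

for _ in range(200):
    place_pixel(SIZE // 2, SIZE // 2)  # grow a cluster from the centre
```

Feeding every pixel toward the same point grows a dense cluster that spills outward as the centre fills up, which is roughly how the bark-like texture emerges.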

We integrated the Wiimote to use it as an eraser: when you shake the Wiimote, the image gets erased. To connect the Wiimote with Processing, we use a piece of software called OSCulator. For the sound part, we use Pure Data: Processing sends Pure Data the pixels’ orientation (their angle), which modulates some sound samples (made with Audacity). Both OSCulator and Pure Data use the OSC protocol to communicate with Processing. Here you can find the OSC library.
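For reference, the OSC messages exchanged between these programs have a very simple binary layout: a null-padded address string, a null-padded type-tag string, then the big-endian arguments. A minimal hand-rolled encoder in Python (the address /pixel/angle is a made-up example, not necessarily the one our patch used — in practice the OSC library handles all of this):

```python
import struct

def pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    # One float argument: address, the type-tag string ",f", then the value
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/pixel/angle", 1.57)
print(len(msg))  # 24 bytes: 16 (address) + 4 (type tags) + 4 (float)
```

Sending such packets over UDP is all it takes for Processing, OSCulator and Pure Data to talk to each other.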

You can download the complete set here.
