scriptographer/paperjs. lissajous
I was recently asked by a friend about some line patterns we used on a project a number of years ago. What I've learned since then is that these beautiful line patterns are known as Lissajous curves, made famous by Jules Antoine Lissajous (and Nathaniel Bowditch). Way back in 2005 we were generating these using the blend tool in illustrator. It wasn't the slowest process, but it wasn't the fastest either, so we would save the variations in different files and reference them later. Cut to seven years later (I feel old now): instead of digging through my hard drive for those saved files, I whipped up a little script to create these wondrous curves in scriptographer (and sure, why not, with paperjs as well).

Lissajous curves are generated by a relatively simple formula:

// x = A sin(at + δ), y = B sin(bt)
// all angles are in radians
var a = 3; // amplitude/frequency of x
var b = 4; // amplitude/frequency of y
var step = 1 * (Math.PI/180); // 1 degree converted to radians
var delta = 60 * (Math.PI/180); // phase shift converted to radians

for(var angle=0; angle<=(Math.PI*2 + step); angle+=step) {
    x = Math.sin( angle*a + delta );
    y = Math.sin( angle*b );
}
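The same loop, pulled out as a standalone function that simply returns the computed points (no Scriptographer/paperjs required; the function name and point objects here are my own illustration):

```javascript
// Standalone sketch of the Lissajous equations above; returns unit-scale
// points which would then be scaled and placed onto the canvas.
function lissajousPoints(a, b, delta, stepDeg) {
    var step = stepDeg * (Math.PI/180);
    var points = [];
    for (var angle = 0; angle <= (Math.PI*2 + step); angle += step) {
        points.push({
            x: Math.sin(angle * a + delta),
            y: Math.sin(angle * b)
        });
    }
    return points;
}

var pts = lissajousPoints(3, 4, 60 * (Math.PI/180), 1);
```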


There is a glitch that happens every now and then that causes the control handles of the Bézier curves to shoot off the screen. I'm looking into a better way of calculating the Bézier curves.

Try as I might, and given my extreme weakness with entry-level calculus, I couldn't determine the error in the math. Instead I corrected the glitch with a simple function that checks whether the control handles of the curve are within an acceptable boundary.


function boundsCheck(pt1, pt2, tolerance) {
    var brect = new Rectangle(
        new Point(-(pt2.x+tolerance.x), -(pt2.y+tolerance.y)),
        new Point(pt2.x+tolerance.x, pt2.y+tolerance.y)
    );
    return pt1.isInside(brect);
};
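In plain JavaScript (without Scriptographer's Point/Rectangle types) the same idea looks something like this; boundsCheckPlain is my own name for it, and it assumes, like the rectangle above, bounds symmetric around the origin:

```javascript
// Accept a control handle (pt1) only if it falls within the segment
// point (pt2) expanded by a tolerance, mirrored around the origin.
function boundsCheckPlain(pt1, pt2, tolerance) {
    var xMax = Math.abs(pt2.x) + tolerance.x;
    var yMax = Math.abs(pt2.y) + tolerance.y;
    return Math.abs(pt1.x) <= xMax && Math.abs(pt1.y) <= yMax;
}
```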






Phase Shift
the phase shift (delta) of the curve

Frequency
frequency ratio across the x & y axes

Stroke Weight
set two different stroke weights and the curve will interpolate from the start to the end weight

Colors
set two colors and the curve will interpolate from the start to the end color

Opacity
set two opacity values and the curve will interpolate from the start to the end opacity

Blend
blend mode of opacity: normal, multiply, screen, etc.


Flyout Menu:
Alternate
colors will alternate every other segment

Blend
colors will blend across segments from start to end


Mouse drag
modulate the frequency (using the input values as maximums); at the document origin the frequency is 0, and the further away, the closer to the maximum

Mouse drag+shift
modulate the phase shift (mouse x axis) from 0 - 360
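The two modulation mappings above boil down to simple linear maps; a rough sketch (function names are mine, not from the script):

```javascript
// Generic linear re-mapping of a value from one range to another.
function map(value, inMin, inMax, outMin, outMax) {
    return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

// drag: distance from the document origin -> 0..max frequency
function dragToFrequency(dist, maxDist, maxFreq) {
    return map(Math.min(dist, maxDist), 0, maxDist, 0, maxFreq);
}

// drag+shift: mouse x -> phase shift 0..360 degrees, returned in radians
function dragToPhase(mouseX, docWidth) {
    return map(mouseX, 0, docWidth, 0, 360) * (Math.PI/180);
}
```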

The paperjs version is here and works as described above; grab the scripts/source for scriptographer: lissajous.js


paperjs. frederickkPaper folio.js
Update: Over the course of the past few months, I've settled on the name folio.js, which is much better than frederickkPaper. The name change has been reflected in this post; however, some of the usage methods mentioned in this post are inaccurate, so please refer to the GitHub page https://github.com/frederickk/folio.js for the most up-to-date information.

Over the past couple of years, I've assembled a library of functions for Scriptographer, and given the recent news, I began porting this rag-tag collection into a slightly tighter library and framework for web development. I'm calling it folio.js; at the moment I'm focusing on paperjs, but in the future I'd like to try to make it more generic for use with other web-based creative tools (processingjs, et al.). In addition to the Scriptographer-specific functions, I've also ported some of the more useful features from my Processing library Frederickk.

I've only tested within Chrome and Safari on the desktop and was pleasantly surprised to see that the following examples worked on iOS (Chrome and Safari) as well. However, folio.js in its current state should be seen as alpha development.

Current version can be downloaded from github here.


Usage
The library is loaded like any other library in HTML

<script type="text/javascript" src="path/to/folio.js"></script>

Once loaded, the "folio" namespace is available, which simply means you can access the various functions within the library. An overview of these functions can be found in the README on GitHub (this will change over time as time allows for development).

// create a shortcut to the namespace if you wish
var f = folio;

// to access other features quickly
// you can use additional shortcuts for namespaces
var f3d = folio.F3D;
var fio = folio.FIO;
var ftime = folio.FTime;

// and to access certain classes
var fpoint = new f.FPoint();
var fcolor = new f.FColor();
var fdate = new ftime.FDate();

// and so on...


Examples
These are just some small examples which implement folio.js. I aim to add more example usages of the library in the future. The interactive examples and the simple examples (in code) below can be viewed fullscreen here.


3 halbierte Würfel (3 halved cubes)
inspired by Max Bill
mouse rotate in space
return start animation
r reset to start position


As you can see, F3D is a very rudimentary 3D class, mainly designed for creating very simple 3D geometry within a paperjs context. I would like to develop it to be more efficient, but I have no plans to implement anything beyond that; I believe that would belie the intent of paperjs, plus there are better JavaScript libraries for creating 3D using WebGL. The above example implements F3D as well as FShape.



F3D Example (code below)
mouse rotate in space


// the core folio.js namespace
var f = folio;

// the F3D namespace
var f3d = f.F3D;

// initiate the scene, this is how the transformations are
// interpolated for 3D geometry to appear on the screen
var scene = new f3d.FScene3D();

// 3D path
var path3;

// values for rotation
var rotation = [];

function Setup() {
    // required setup scene
    // (optional parameters) width, height, focalLength, display mode
    // 'ORTHO' is for isometric
    // 'PERSPECTIVE' is the default mode
    scene.setup(view.bounds.width, view.bounds.height, 1000, 'PERSPECTIVE');

    // initiate the 3D path
    path3 = new f3d.FPath3();

    // add 3D points to this path
    // FPoint3 takes 3 arguments (x, y, z)
    path3.add( new f3d.FPoint3(-100, -100,  100) );
    path3.add( new f3d.FPoint3( 100, -100,  100) );
    path3.add( new f3d.FPoint3( 100, -100, -100) );
    path3.add( new f3d.FPoint3(-100, -100, -100) );

    // use FColor to give the field a random RGB color
    path3.fillColor = new RgbColor().random();
    path3.closed = true;

    // important! the path has to be added to the scene
    scene.addItem( path3 );
};

function Draw() {
    // rotates all of the items in the scene
    // TODO: be able to rotate individual items
    scene.rotateX( f.radians(rotation[0]) );
    scene.rotateY( f.radians(rotation[1]) );
    scene.rotateZ( f.radians(rotation[2]) );

    // draw scene to screen
    // the scene contains all paths (only FPath3 items) added to the scene
    // the scene can have a scalar as well
    scene.draw().scale(1);
};

function onMouseDrag(event) {
    rotation[0] = event.point.y;
    rotation[1] = event.point.x;
    rotation[2] = event.point.x - event.point.y;

    // redraw to update scene
    Draw();
};




Clocks
return start timer


FTime is a collection of classes for managing time based activities, whether it be getting the time and date, creating a timer (as in a stopwatch), or creating a counter to increment over the duration of a specified time (a stepper).



FTimeExample (code below)
return start timer


// the core folio.js namespace
var f = folio;

// the FTime namespace
var ftime = f.FTime;

// Date & Time
var fdate;
var dateText;

// Stopwatch timer
var fstopwatch;
var stopwatchText;

function Setup() {
    // initiate FDate
    fdate = new ftime.FDate();

    // this is how we'll display the date
    dateText = new PointText( new Point(35, view.bounds.center.y) );
    dateText.justification = 'left';
    dateText.characterStyle = {
        font: 'american-typewriter',
        fontSize: 30,
        fillColor: 'red'
    };

    // initiate FStopwatch
    fstopwatch = new ftime.FStopwatch();

    // this is how we'll display stopwatch time
    stopwatchText = new PointText( new Point(view.bounds.width-35, view.bounds.center.y) );
    stopwatchText.justification = 'right';
    stopwatchText.characterStyle = {
        font: 'american-typewriter',
        fontSize: 30,
        fillColor: 'red'
    };
};

function Update(event) {
    // reinitiate FDate to keep the time up-to-date
    fdate = new ftime.FDate();

    // a temp holder to convert our stopwatch time
    var temp = new ftime.FDate();

    // show the time of our stopwatch
    stopwatchText.content = temp.get( fstopwatch.get() );

    // redraw to update scene
    Draw();
};

function Draw() {
    // show the time
    dateText.content = fdate.now();
};

function onKeyDown(event) {
    // start/stop the stopwatch by pressing enter
    if(event.key == 'enter') {
        fstopwatch.toggle();
    }
};



FStepper is the class for creating a value which increments over a specified time period.


The Debate




FStepper Example (code below)
any key swap colors


// the core folio.js namespace
var f = folio;

// the FTime namespace
var ftime = f.FTime;

// Steppers for animations
var move;
var blend;

var ball;

// holder for ball's points
var points = [];

// holder for ball's colors
var colors = [];

var background;

function Setup() {
    // initiate move FStepper
    move = new ftime.FStepper();

    // set the time length to 3 seconds
    move.setSeconds( 3 );

    // start point
    points[0] = view.bounds.topLeft;
    // end point
    points[1] = view.bounds.bottomRight;

    // initiate blend FStepper
    blend = new ftime.FStepper();

    // set the time length to 9000 milliseconds (== 9 seconds)
    blend.setMillis( 9000 );

    // swap colors
    colors[0] = new paper.RGBColor(95/255, 56/255, 102/255);
    colors[1] = new paper.RGBColor(241/255, 93/255, 94/255);

    // initiate the background
    background = new paper.Path.Rectangle( view.bounds.topLeft, view.bounds.bottomRight );

    // this is the ball we'll animate with the stepper
    ball = new Path.Circle( points[0], 60 );
    ball.fillColor = new paper.RGBColor(1.0, 1.0, 1.0);

    // start the FSteppers
    move.toggle();
    blend.toggle();
};

function Update(event) {
    // required to keep the stepper in sync with our script
    move.update( event.time );
    blend.update( event.time );

    // when the stepper has reached its end within the
    // time allotted, it's done
    if( move.isDone() ) {
        // tell the stepper to start counting again
        move.toggle();
    }
    if( blend.isDone() ) {
        // tell the stepper to start counting again
        blend.toggle();
    }

    // redraw to update scene
    Draw();
};

function Draw() {
    background.fillColor = colors[1];

    // in order to animate the ball, we'll lerp from
    // point1 to point2
    var pos = points[0].lerp( points[1], move.delta );

    // move the ball in an arc
    pos.x *= Math.cos(move.delta);
    pos.y *= -Math.sin(move.delta)*2;
    ball.position = pos;
    ball.translate( 0, view.bounds.height );

    // blend the colors from one to the other
    var col = colors[0].lerp( new paper.RGBColor(1.0, 1.0, 1.0), blend.delta );
    ball.fillColor = col;
};

function onKeyDown(event) {
    // when any key is pressed the colors swap
    var temp = colors[0];
    colors[0] = colors[1];
    colors[1] = temp;
};



FColor
folio.js augments paperjs' existing color class with additional functionality such as lerping colors, darkening/lightening colors, parsing hex colors, generating random colors, etc.
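The lerp idea is easy to sketch standalone with plain {r, g, b} objects in the 0..1 range (the actual folio.js methods live on paper's color class; see the GitHub README for the real names):

```javascript
// Linear interpolation between two colors, component by component;
// t = 0 returns c1, t = 1 returns c2.
function lerpColor(c1, c2, t) {
    return {
        r: c1.r + (c2.r - c1.r) * t,
        g: c1.g + (c2.g - c1.g) * t,
        b: c1.b + (c2.b - c1.b) * t
    };
}
```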


Chain-Chain-Chain
mouse & click move circles
mouse & click & shift resize circles



10 PRINT CHR$(205.5+RND(1)); (code below)
inspired by 10 PRINT CHR$(205.5+RND(1));
mouse generate new pattern
input fields change colors


// the core folio.js namespace
var f = folio;

var background;
var group10;

// our cross
var cross;

// size of pattern elements
var size;

// colors holder
var colors = [];

function Setup() {
    // scale the size to the width of the canvas
    size = (view.bounds.width/10)/2;

    // create our cross element
    // cross is an FShape element
    cross = new Path.FCross(
        view.bounds.center.x, view.bounds.center.y,
        size, size,
        size,
        'SHARP'
    );

    // hide the cross (because we'll be cloning it in Draw())
    cross.visible = false;

    // initiate draw group
    group10 = new paper.Group();

    // initiate background
    background = new paper.Path.Rectangle( view.bounds.topLeft, view.bounds.bottomRight );
};

function Draw() {
    // pull in color values from input fields
    // uses jquery to get values
    colors[0] = new RgbColor().hex( $("#hexcolor1").val() );
    colors[1] = new RgbColor().hex( $("#hexcolor2").val() );

    background.fillColor = colors[1];
    background.bounds.size = view.bounds.size;

    // a bit of a hack to clear the group when redrawing
    group10.removeChildren();

    for(var y=0; y<view.bounds.height+size/2; y+=size*2) {
        for(var x=0; x<view.bounds.width+size/2; x+=size*2) {
            // generate a random integer between 0 and 2
            var rand = f.randomInt(0,2);

            // grab one of the lines in the cross
            // at random of course
            var c = cross.children[ rand ].clone();

            // darken colors[0] to use as a secondary color
            // lighten()/darken() are added to paper.Color via folio.js
            (rand == 0)
                ? c.fillColor = colors[0].lighten(0.03)
                : c.fillColor = colors[0].darken(0.03);

            // position the line on the grid
            c.position = new paper.Point(x+size/2, y+size/2);

            // add to our group (which we clear each redraw)
            group10.appendTop(c);
        }
    }
    group10.appendBottom(background);
};

function onMouseDown(event) {
    Draw();
};



Building & Compiling
Writing this "library" was an opportunity to delve into compiling multiple JavaScript files. For any typical dev, this is the most basic of learnings, but for me it was (not a revelation, but) helpful.



Here is a small collection of links about building, minifying, concatenating & documenting multiple JavaScript files:



Template/Framework
In addition to the library, I've also created a template for use in creating paperjs projects. It's structured similarly to Processing or OpenFrameworks. Also included within the template are some basic structures and functions for creating a basic webapp.




Both the folio.js library and the template are packaged together on GitHub https://github.com/frederickk/folio.js


Reference
My knowledge of JavaScript is still pretty unsophisticated, but I have to admit I'm intrigued about where the future might lead, with new JavaScript frameworks popping up seemingly every day. In general I'm much happier with the rendering and logic of paperjs in comparison to processingjs; however, the choice of framework is very much dependent on intent.

If anyone else is looking to understand JavaScript a bit better, here's a smattering of links that I've referenced quite a bit in creating folio.js. Maybe they'll be of interest to others.


processing. pie pack illustrations
I've been working on a massive project at work lately and one tiny portion of it was creating visuals for analytics. In addition I've been exploring circle packing and rigid body physics engines (i.e. Box2d and Chipmunk) in Adobe Illustrator with scriptographer (ridiculous I know) and paper.js (more acceptable). Once I have more time I'll publish my results on these explorations.

In my pseudo-delirium and boredom of working with these infographics, I wondered if illustrations made from circle-packed pie charts would be remotely interesting. Turns out the answer is "Jein" (German for "yes and no").



Since I was already exploring circle packing, it seemed logical to pack the charts together. At first I began sketching in scriptographer, but it quickly became obvious I would need/want a little more power under the hood to generate the charts. However, I now have a nice little script to pack objects (circles) in illustrator (yes... ridiculous, I know).




Image Reading
Using processing, the sketch loads an image on start-up and maps brightness areas using a voronoi mesh, which I conveniently borrowed from EVIL's SVG Stipple Generator. The size of each voronoi area matches the brightness (0 smaller, 255 larger) of the area in the image over which it lies, which is then mapped to the size of the created pie chart as well as the number of pieces to show in the chart.

Charts
The charts are pretty straightforward in their construction. Size, as mentioned above, is based on brightness and mapped to a minimum and maximum size set from the GUI; the number of pieces is determined by brightness and mapped to a maximum piece count; hole and hole size can also be controlled via the GUI.
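The actual sketch is written in Processing, but the two mappings are simple enough to sketch in a few lines (parameter names here are placeholders, not the GUI's actual names):

```javascript
// Map a 0-255 brightness value to a chart size between min and max.
function brightnessToSize(brightness, minSize, maxSize) {
    return minSize + (maxSize - minSize) * (brightness / 255);
}

// Map a 0-255 brightness value to a number of pie pieces (at least 1).
function brightnessToPieces(brightness, maxPieces) {
    return Math.max(1, Math.round(maxPieces * (brightness / 255)));
}
```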




Sprinkle in some methods to output to PDF and TIFF and presto chango. After playing with the app for a bit, I was thinking "ok... now what?" PDFs and TIFFs are nice on screens, but... what if I could screen print (or whatever) some of these? I would need to limit the number of colors used; posterize. boom. done. right? Yes, but boring. So I whipped up a threaded class that recolors the source image based on the colors from another image, so now I can limit colors to the number I choose as well as choose the exact colors I want. And then for shits and giggles I added Kuler support to pull in color palettes.
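The recoloring itself reduces to a nearest-color lookup. The real class is a threaded Processing class; this standalone sketch (my own naming) just shows the per-pixel mapping:

```javascript
// Find the palette color closest to a pixel by squared RGB distance.
function nearestColor(pixel, palette) {
    var best = palette[0], bestDist = Infinity;
    for (var i = 0; i < palette.length; i++) {
        var p = palette[i];
        var d = Math.pow(pixel.r - p.r, 2) +
                Math.pow(pixel.g - p.g, 2) +
                Math.pow(pixel.b - p.b, 2);
        if (d < bestDist) { bestDist = d; best = p; }
    }
    return best;
}

// Recolor every pixel of an image to its nearest palette color.
function recolor(pixels, palette) {
    return pixels.map(function(px) { return nearestColor(px, palette); });
}
```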








This was a nice short little experiment that I'm happy with. It was satisfying to sketch out the idea and follow through with very little resistance along the way (as opposed to some of my recent forays into physical computing).

Code
This is the class for recoloring images; it's threaded so that when used in conjunction with the rest of an app it won't interfere with the framerate.


And here's the class which wraps some direct opengl calls for drawing the pie charts; I've noticed you can get upwards of 500 or so charts before the app starts to get bogged down.


Code for the voronoi brightness mapping you can get from the StippleGen source code, StippleGen.zip. Lastly, the code for circle packing was modified from Iain Maxwell's cell packing code. There are some odd behaviours in the code, as you can see in the above video, i.e. when lots of circles are added they begin to overlap; at some point I will investigate further.

For the Kuler support, I've updated my frederickk processing library with an FKuler class (which is a cleaner implementation of what I used here).

The most recent build of the library can be downloaded here


Scriptographer. Create Grid
often i find myself needing to generate a grid in illustrator, and every time i have to do it by hand. now i realize the ridiculousness of this need, as if you need a "grid" in illustrator you should probably use indesign; however, in spite of that i whipped this script up.



features:
1/ set margins
2/ rows and columns are set based on margins
3/ check "Remove Existing Margins/Grid" in order to clear previously created grids
4/ set ruler unit mm, inch, pica, point/pixel (via flyout menu in top right)

*update*
5/ create grids on all artboards at once


i have only tested on a mac using illustrator cs5.1. if you find a huge bug (which is likely), let me know. grab the script: CreateGrid_0_0.js

i also learned a little trick to update palette components in scriptographer on the fly by using "getComponent". here's an example:

var values = {
myValue: 3,
};


var components = {
ChangeToMm: {
type: 'button',
value: 'millimeter',
fullSize: true,
onClick: function() {
updatePalette('millimeter');
}
},
ChangeToInch: {
type: 'button',
value: 'inch',
fullSize: true,
onClick: function() {
updatePalette('inch');
}
},


myValue: {
type: 'number',
label: 'My Value',
units: 'millimeter',
}
};


var palette = new Palette('My Palette', components, values);


function updatePalette(theUnit) {
palette.getComponent('myValue').units = theUnit;
}




Processing + Arduino. LightBrush
i've been working on this project off and on (a bit more off than on) since early 2009, and most recently i've given myself a deadline of wrapping it up by the end of 2011. from the looks of it, however, the conclusion as it stands will be incomplete. i'm posting it here for posterity's sake as well as in hope that maybe some of the successes and failures i've had may be of interest to others.

the idea
the crux of the project was to create light drawings in the air, using light as a medium to recreate images, displayed on screen, in the air. a device with lights on it would somehow have x and y coordinates based on its position in the air that correlate to x and y coordinates on the screen.



the beginning
i whipped up an easy pixel grabber program, which used a black & white image and sent the location's pixel value (0 or 255) to a very crude prototype using one blue led. the single-color led turns on or off based on the brightness of the pixel on screen. the biggest challenge was getting the coordinates of the device in space. i ended up using a wii mote (via OSC) and an IR led to tell the computer the x and y coordinates of the device.






the initial tests with the prototype were promising, but it was obvious i needed to tweak it (i.e. a larger light source) in order to achieve the resolution i was hoping for. i knew i would need more LEDs and contemplated a stick or maybe an led matrix (multiples of either, or a combination of the two). subtle pitch and yaw movements of the "brush" caused unwanted distortion, so i used plexiglass as a surface to press on and the images got better, but lacked the "magic" i was hoping for.

everyone else
then i put the whole project on hold as my wife and i uprooted and relocated to europe, and in the meantime the inevitable happened: others developed the same idea (zeitgeist?). the first being by berg for dentsu london, using an ipad to draw 3D type in the air.



and i should also mention timo arnall's other work with light painting.



although very close to my idea, no one had done it exactly as i pictured it in my head. there was still hope, yet i was still busy with other things and didn't have the time to invest in pursuing the project any further. by now it was near the end of 2010, when i stumbled across the light scythe, a very elegant execution which uses a large stick clad with RGB LEDs to create stunning images in the air. *drat* exactly what i wanted.


light scythe

onwards and upwards
i thought of just giving up, and maybe i should have, but i decided to keep trudging on, because now i knew i wanted to create a more handheld version which could be used untethered from the computer and maybe even together with other similar devices. which led me to where i am now.

i began a couple of months ago by focusing on using an 8x8 RGB matrix and a colorduino (shield). wanting to make it completely mobile, i started by putting together an iphone app and interface using openFrameworks. it was going well until i hit a couple of snags programming in objC. those problems aside, i had an app that would work in theory, but in practice did not, because (so far as i know) communicating with external hardware devices from an iphone requires additional licenses and permissions from apple. so in my quest to find an alternative method of communicating with the arduino from an iphone, i found arms22's soft modem (i've included the parts needed to build one for yourself below), but i quickly found out that its baud rate of 1225bps would be way too low for sending 64*3 bytes (64 leds, and 3 bytes per color) at a speed that would be responsive enough for light drawing.
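the arithmetic behind that bottleneck, for the curious (assuming roughly 10 bits on the wire per byte, counting start/stop bits):

```javascript
// one full frame for the 8x8 matrix is 64 LEDs x 3 bytes
var bytesPerFrame = 64 * 3;               // 192 bytes
var bitsPerFrame = bytesPerFrame * 10;    // ~1920 bits incl. serial framing
var secondsPerFrame = bitsPerFrame / 1225;
var fps = 1 / secondsPerFrame;            // well under 1 frame per second
```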



soft modem hardware
2 0.47µf electrolytic capacitor (blue can)
1 47pF tantalum capacitor (yellow)
2 100k resistor
3 10k resistor
1 4.7k resistor
1 3.5mm 4 pole phono input

that was a hard pill to swallow, and i was still hopeful that i could somehow pull off a completely mobile, computer-less version of my light brush. i had created an overly complicated chain of sending a wifi signal from the phone to my laptop (via OSC), then converting the OSC signal to serial and sending it to the arduino. luckily kevin cannon snapped me back to reality and suggested that if i want to go mobile, interfacing with an android device would be much easier. if you've read this far, you know the dream of making what i envision is fading, but my stubborn determination kept me moving forward.

the result
abandoning all hopes of using the iphone, i switched to processing since i can prototype much quicker with it. in the switch i also abandoned the wiimote, as communicating with it via OSC was slow and cumbersome; once again kevin suggested just using a webcam and an IR filter. since a purely computer-less operation is on ice, i created a custom case which houses the arduino, colorduino (shield), 8x8 led matrix and 4 IR leds.



light brush hardware
4 IR Leds
8x8 rgb matrix
arduino
colorduino (shield)

after hours of coding, i finally had a desktop application that allowed me to test my light brush. i can send an 8x8 frame of RGB data at once (i.e. an image or parts of an image) and track the device in real time. i recruited my friend ben to help me test the hardware and software. we set up everything and set about drawing pictures in the air...



...but it didn't work at all like i/we had thought/hoped it would. as we tried different settings and code optimizations, it became obvious why. we learned you can only draw by moving the device in one direction (which is how the light scythe and others work), and also that the 8x8 led matrix at 60mm (2.3in) is most likely too small. below you can see the results of us trying to draw a simple "2", and that it didn't work. although my experiment with light drawing has ended in failure, i've uploaded large chunks of the code i wrote for this, such as sending massive amounts of data over serial to arduino and a tracking class; maybe somebody out there will find them useful.


i failed to draw this "2" in the air, and learned that drawing images works better when the image size corresponds with the matrix size








as with previous projects, i'm really happy with what i've learned but i'm disappointed that i wasn't able to get it to work as i had hoped. maybe i'll let it sit for another two years and come back to it.

code
this is the class i created for tracking blobs (i.e. IR leds) and averaging their coordinates for smooth operation, (i wrapped it in a thread so that the overall frame rate of the main app isn't affected)
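the averaging idea itself is tiny; here's an un-threaded sketch of it in plain javascript (the real class is written in processing, and these names are just for illustration):

```javascript
// keep the last N blob coordinates and report their running average,
// which smooths out jitter in the tracked IR position
function makeSmoother(windowSize) {
    var samples = [];
    return function add(x, y) {
        samples.push({ x: x, y: y });
        if (samples.length > windowSize) samples.shift();
        var sx = 0, sy = 0;
        for (var i = 0; i < samples.length; i++) {
            sx += samples[i].x;
            sy += samples[i].y;
        }
        return { x: sx / samples.length, y: sy / samples.length };
    };
}
```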



here's the class i wrote for sending an image via serial to arduino (also wrapped in a thread)
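the packing itself is straightforward: 3 bytes per led, row by row. a sketch of the idea (my real code is in processing/java, and the actual protocol may frame things differently):

```javascript
// flatten an array of {r, g, b} pixels (0-255) into the flat byte
// stream that gets written out over serial
function packFrame(pixels) {
    var bytes = [];
    for (var i = 0; i < pixels.length; i++) {
        bytes.push(pixels[i].r & 0xff, pixels[i].g & 0xff, pixels[i].b & 0xff);
    }
    return bytes;
}
```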



this is the corresponding code for the arduino (using a colorduino), which also needs my modified version of the colorduino library


Drop Processing
Drop Processing is an app i put together a while ago for quickly batch processing images (using adobe photoshop). often when using images i would need them to be all cmyk or all rgb, and i was constantly re-doing the same "batch processing" actions in photoshop or having to create a multitude of drag-and-drop applets.

Drop Processing utilizes applescript automation in order to process images. it's meant to be used as a drag-n-drop app by selecting a group of images and dragging them to the app icon. i like to keep the app in the sidebar of finder windows for quick access.





max. dimension
images can be resized by the longest dimension 0 = do not resize
(i.e. if you enter "50" an 800 x 600 would be sized down to 50 x 38)

resolution
resolution can be changed to: 72, 96, 150, 300, or "no change"
(i purposely made this a drop-down menu, as i commonly only need these values)

color space
the usual suspects: cmyk, rgb, grayscale, and "no change"

flatten image
if image is multi-layered, .tif, .psd, etc. it will be flattened to one layer

save as copy
this will prepend "Copy of" to the filenames of processed images

set preferences
sets the settings, settings are saved from session to session
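the max. dimension rule above as a standalone calculation (the app itself is applescript; this is just the math):

```javascript
// scale an image so its longest dimension equals maxDim;
// a maxDim of 0 means "do not resize"
function scaleToMax(width, height, maxDim) {
    if (maxDim === 0) return { width: width, height: height };
    var scale = maxDim / Math.max(width, height);
    return {
        width: Math.round(width * scale),
        height: Math.round(height * scale)
    };
}
```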

as you can see in the video, it's pretty easy to use and it supports all photoshop-compatible file types. there are two caveats: all dropped ".gif" files will be converted to ".psd" files, and all dropped ".jpeg" files will be converted to ".jpg" files. as of right now it does not support dropping folders.

download Drop Processing; if anyone is interested in the source, let me know. note, however, that i wrote the app using applescript studio, which is no longer supported by apple.

i've tested this app using osx 10.5 - 10.6.8 and Photoshop CS2 - CS5. i cannot be certain it will work in lion (my guess is, it will not)


processing. triangulation + meshes
i've been slowly experimenting with meshes (mostly with triangulation algorithms), or better said, trying to understand how to better control them. it started out innocently enough and has slowly progressed. after deconstructing jonathan puckey's delaunay raster scriptographer work, i created my own version and began modifying it. i expanded upon it by creating a version which maps brightness values of an area to line density (instead of colors).

karl pilkington, triangulated

although proud of my achievement, the entire territory is something jonathan is already exploring, but honestly, without a lot of conceptual thought, i kept trudging aimlessly on. i thought, well, i'll take this idea to live and moving images (recently the internet has proved to me that everything has been done, with this and this). my explorations with my own puckey-delaunay-raster script proved the best illustrations come from getting into the details of the face, so when i switched to processing for moving images i created a rudimentary edge detection class which pulls out the edges as a PVector ArrayList.




the first attempt was terribly slow, but worked more or less. i contemplated and subsequently failed to port it over to openFrameworks (i couldn't get the edge detection to work). i realized even if the concept is nothing new, i could at least learn more about optimization and harnessing opengl VBOs (vertex buffer objects). eventually i developed a version which takes webcam input, edge detects, and triangulates, all while maintaining a frame rate of 30fps+. a z depth (height map) is generated from the brightness of the edge-detected pixel, and by adding a multiplier an interesting "face landscape" occurs (especially when using the smooth gradient mode).



pgl.beginGL();

if(triangles.size() != 0) {
if(mode == 0 || mode == 1) {
gl.glBegin(GL.GL_TRIANGLES);

float x1,y1,z1, x2,y2,z2, x3,y3,z3;
float px1,py1,pz1, px2,py2,pz2, px3,py3,pz3;
float r,g,b,a, r2,g2,b2;

if(mode == 0) {
/**
* smooth gradients
*/
for(int i=0; i<TriangleVBOPoints.length; i+=3) {
x1 = TriangleVBOPoints[i];
y1 = TriangleVBOPoints[i+1];
z1 = TriangleVBOPoints[i+2];

int c1 = img.pixels[ ((int)y1*capture.width) + ((int)x1) ];
r = norm((c1 >> 16) & 0xff, 0,255);
g = norm((c1 >> 8) & 0xff, 0,255);
b = norm(c1 & 0xff, 0,255);
if(bOpaque) a = 1.0;
else a = norm(FTools.luminance(c1), 0,255);

gl.glColor4f(r,g,b, a);
gl.glVertex3f(x1,y1,z1*zHeight);
}

} else if(mode == 1) {
/**
* flat colors
*/
for(int i=0; i<TriangleVBOPoints.length; i+=9) {
x1 = TriangleVBOPoints[i];
y1 = TriangleVBOPoints[i+1];
z1 = TriangleVBOPoints[i+2];

int c1 = img.pixels[ ((int)y1*capture.width) + ((int)x1) ];
r = norm((c1 >> 16) & 0xff, 0,255);
g = norm((c1 >> 8) & 0xff, 0,255);
b = norm(c1 & 0xff, 0,255);
if(bOpaque) a = 1.0;
else a = norm(FTools.luminance(c1), 0,255);

gl.glColor4f(r,g,b, a);
gl.glVertex3f(x1, y1, z1*zHeight);

x2 = TriangleVBOPoints[i+3];
y2 = TriangleVBOPoints[i+4];
z2 = TriangleVBOPoints[i+5];
gl.glVertex3f(x2, y2, z2*zHeight);


x3 = TriangleVBOPoints[i+6];
y3 = TriangleVBOPoints[i+7];
z3 = TriangleVBOPoints[i+8];
gl.glVertex3f(x3, y3, z3*zHeight);

}

}

gl.glEnd();

}

pgl.endGL();


i really liked this valley effect when increasing the z depth, and i thought it would be cool if i could start to distort the image by "folding" and "crumpling" it, similar to the pop faces work by joshua scott.



then i began thinking it would be even better to do this directly in illustrator using scriptographer. after porting my edge detection code over to javascript and creating the necessary 3d infrastructure, i ran into challenges distorting the image, although i did manage to create a neat kaleidoscope effect, which wasn't exactly what i was going for (although, that could lead to something)



i've pretty much abandoned the idea of achieving what i want in illustrator and have gone back to processing. using particles and physics i can distort a blank mesh of lines; however, i cannot deform an image. i can get the image mapped as a texture to a mesh, but as i distort the mesh, the image gets clipped (as if in a mask). this probably stems from me calculating the texture coordinates incorrectly.

all in all, i'm happy with what i've learned especially better understanding direct opengl calls and optimization, but i'm frustrated that i've hit this stumbling block. i think i'll probably put this idea on ice and focus on some other things (that require less programming) and maybe loop back around in the future.


scriptographer. lorem ipsum


I've completely overhauled the Lorem Ipsum Generator, which can be downloaded at http://scriptographer.org/scripts/general-scripts/lorem-ipsum-generator/. I've cleaned up some minor bugs, specifically the way punctuation is "sprinkled" throughout the text, and added some additional features:

Dummy Text can now be generated in different Styles


I decided to ratchet up the ridiculousness and include a way of generating "mock philosophy" by implementing the Kant Generator python script and "Context-Free Grammar" XML files, as implemented in Nodebox, whose source code can be found here. In order to activate it, visit the scriptographer page, download "kant.zip", and unzip its contents into the same folder as "LoremIpsum_0_2.js"; the script will recognize the folder and reveal options for:

  • Kant (generates several paragraphs of Kantian philosophy)
  • Husserl (generates several paragraphs of Husserl)
  • Thanks (generates a thank you note)

How I did this is a bit backwards, as my attempts to transpose the original python code to Java or JavaScript failed miserably. So instead I harnessed Java's Runtime class and getRuntime() in order to execute the python script and grab the results. (note: I believe the way I've implemented this will only work on OSX. I would be curious to see what happens on a Windows machine.)

Here's how I implemented getRuntime() in the code. This could theoretically be used to execute any terminal script, which opens up some future possibilities.

function ExecPython(pythonCmd, delimitter) {
    var process = java.lang.Runtime.getRuntime().exec( 'python ' + pythonCmd );
    var pythonResult = '';
    var pythonResultArr = new Array();
    if(delimitter == '\r') delimitter = '\r\r';

    if(pythonCmd != '') {
        var reader = new java.io.BufferedReader(
            new java.io.InputStreamReader( process.getInputStream() )
        );
        var line;
        while( (line = reader.readLine()) != null ) {
            var m = line.split(delimitter);
            for(var i=0; i<m.length; i++) {
                pythonResultArr.push(m[i]);
            }
            pythonResult += delimitter + trim(line);
        }
    } else {
        console.log('Not a valid python command/script');
    }
    return trim(pythonResult);
}


LoremIpsum_0_2.js will generate an unlimited amount of "lorem ipsum" text in a variety of styles (see above) and includes all of the base features (below) from v.0.0. In addition, it can create an unlimited amount of "mock philosophy" when the "kant" add-on is installed.


It happens often enough that I need dummy text in illustrator. unfortunately this capability has yet to be (if ever) implemented. typically I use lipsum.com, generate the text in InDesign, or keep a blank text file with "lorem ipsum" text in it.



loremIpsum_0_0.js will generate an unlimited amount of "lorem ipsum" text, by either number of paragraphs or number of words. download LoremIpsum_0_2.js

base features:

1/ if nothing is selected, a text box with the desired amount of text will be created in the center of the page.
2/ type items can be selected, and the script will either replace or add to the existing text. in both cases, type attributes such as size, leading, and color are maintained. this video shows how the script functions.
3/ punctuation inclusion can be toggled, as well as Title Case for headline creation

so far as i have tested, the script runs in cs3 and cs5 without any errors. if you find a bug (which is likely) let me know. grab the script loremIpsum_0_2.js


processing. prettycolors
the amount of time between posts in the last year has been too much. i'm going to try and blog more regularly; instead of just blogging my final projects, i'm going to start posting little snippets, as well as successes and failures in between.

i present to you a class for pulling colors from the tumblr blog http://prettycolors.tumblr.com/. i like the simplicity of the blog and the idea of color palettes being dynamic and changing over time (that and sites like kuler and colour lovers are covered)



this class grabs colors from the site, and i built in rudimentary (and not yet complete) palette creation options. palettes are groups of five colors, which can be accessed with getPalette(int type, int num). the default is to load the first 50 colors (the max allowed by the tumblr api), but this can be altered by changing the value of numToReturn.
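under the hood, the ORDER type is essentially just chunking the fetched colors into groups of five. a rough sketch of that grouping in plain java (hypothetical, not the class's actual internals):

```java
import java.util.ArrayList;
import java.util.List;

public class PaletteChunks {
    // split a flat list of colors (packed ints, in posting order)
    // into palettes of five; leftover colors are dropped
    static List<int[]> makePalettes(int[] colors) {
        List<int[]> palettes = new ArrayList<>();
        for (int i = 0; i + 5 <= colors.length; i += 5) {
            int[] p = new int[5];
            System.arraycopy(colors, i, p, 0, 5);
            palettes.add(p);
        }
        return palettes;
    }

    public static void main(String[] args) {
        int[] cs = new int[50]; // e.g. the 50 most recent colors
        System.out.println(makePalettes(cs).size()); // prints 10
    }
}
```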

the types of palettes available and created upon instantiation of the class (for faster access) are as follows:

ORDER palettes based on posting order, most recent colors appear first
BRIGHTNESS palettes based on brightness* (default creates 10 palettes)
COMPLEMENT palettes of complementary* colors (default creates 10 palettes)
RANDOM palettes of random colors (default uses the 50 most recent colors)
RANDOM_ALL palettes of random colors (uses all posted colors)

* implementation is very very very rudimentary, if not downright wrong



in addition to creating palettes, palettes can also be "exported" as .png files for later use (for example using my FPalette class as part of my incomplete frederickk processing library)



below is an example that uses the Prettycolors.pde class to create the above application. in this example palettes can be "exported" as .png files by clicking the mouse, palettes can be refreshed by pressing the spacebar.



* as with most of the things found on this blog, the code works for me (osx 10.6.7, processing 1.5.1), but i make no guarantees that it will work for you. however, feel free to update and hack anything useful out of it. just let me know, and please keep a link to this blog or my github repositories


scriptographer. push/pull points
it's been a long time since my last update, and i've got a couple things on the horizon as soon as work lets up a little bit. one thing i've been updating quite a bit recently is my processing library frederickk.

however, here's a quick thing i whipped up for scriptographer. i wanted an easy way to affect meshes i've been building in illustrator, so here's a pair of scripts that "push and pull" points of selected objects around the artboard. somewhat similar to the 'warp tool', but more predictable in my opinion.

pointsPushPullTool_0_0.js uses the scriptographer pen-tool; when the mouse nears a point, it pushes it around. mouse proximity can be set within the palette, and acceleration is the distance an object's points move.






pointsPushPull_0_0.js sets a 'distance threshold' to control how close points must be to each other to be moved together. connect nearby points by setting the random value to 0; changing the random value to something above 0 makes the points randomly move by that distance.
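the distance threshold boils down to a simple euclidean check between point pairs. a minimal sketch of the idea (a hypothetical helper in plain java, not the script's actual javascript):

```java
public class PushPull {
    // decide whether two points are close enough to be moved together,
    // given the 'distance threshold' set in the palette
    static boolean movedTogether(float x1, float y1,
                                 float x2, float y2, float threshold) {
        float dx = x2 - x1;
        float dy = y2 - y1;
        return Math.sqrt(dx*dx + dy*dy) <= threshold;
    }

    public static void main(String[] args) {
        // distance between (0,0) and (3,4) is exactly 5
        System.out.println(movedTogether(0, 0, 3, 4, 5)); // prints true
    }
}
```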






i haven't fully tested these scripts, so they may still be buggy. they seem too insignificant to warrant a release on the scriptographer site.



grab the scripts/source pointsPushPullTool_0_0.js and pointsPushPull_0_0.js


schhplttlr
for the past couple of months i've been working on a project called "schhplttlr: electric beats" with daniel kluge and eugen kern-emden presented by peer to space and in collaboration with the department of cultural affairs munich.

the premise
Using movements inspired by "schuhplattler" (a traditional Bavarian folk dance), the dancers generate beats and light via sensors. Musicians, media artists, dancers and choreographers experiment with these movements during a three-day long interdisciplinary workshop, in which the audience may freely participate. The results were performed at the opening evening on July 15, 2010.


this documentary film and selected photographs will be projected in MaximiliansForum, munich from 19 july 2010 until 5 september 2010.

the process (slightly abridged)
we began this project at the beginning of may, wanting to create something that could only happen in munich but also connect our collective interests of music, physical computing, performance, textile electronics, etc. we stumbled upon the idea of schuhplattler, but we weren't sure yet how to represent it. literally? artsy-fartsy? projected (generative) images? physically? schuhplattler is a dance of courtship. it all started coming together, female and male. light bulbs flitting like fireflies in the spring, feminine, and harmonic. fluorescent lights structural, rhythmic, and manly.

there were three main tools we used: max/msp for receiving and processing signals from the sensors and controlling the sounds. these signals were then sent out as open sound control (osc) messages to processing, which converted the messages into a signal for arduino, which via relays turned the lights (incandescent bulbs and fluorescent tubes) on and off.



we began prototyping. daniel focused on sensor creation and sound control. the gloves and other sensors were made using conductive foam; the wearer of the sensor presses the foam to complete the circuit. these values were then fed into max/msp where daniel wove his magic, triggering sounds and controlling samples. eugen focused on programming reactive projected graphics (later abandoned). my task was to simply turn on light bulbs with arduino using processing.



by the time july rolled around we had successfully built a small prototype: eight light bulbs divided into two groups, plus four fluorescent bulbs, with working glove and thigh sensors. one small hurdle we had yet to solve 100% was the communication between max/msp and processing. as i said before, we chose osc as the communication medium, using the oscp5 library. this was fine; however, the way one builds a proper osc message in max/msp is (still) elusive to us. in the end we just put all of the necessary info within the message (control) field. on top of that, when we did send messages there was a slight delay. we determined the delay was because we were connected via wifi, and solved it by connecting both of our computers directly with an ethernet cable.

we had still yet to test with the dancers or even move into the space; none of this happened until the week of the performance. as you can see, MaximiliansForum is big and empty. but now we at least knew how everything would be connected; it was just a matter of taking the prototype and multiplying everything to the final quantities.











you can see more videos of our process at http://www.danielkluge.com/XX2010/schhplttlr.htm

the hardware
we kept things fairly simple on the hardware side. to control all of the lights, we used a single arduino duemilanove and 12 modules built from the following parts.

2N3904 transistor
100V 1A 1N4002 diode
G6B–1114 5VDC relay
10A Fuse* (didn't actually implement the fuse)






as usual i have to thank my pops for his help with the electronic stuff. initially i soldered all of the components as the relays were jumping out of the breadboard sockets, because of the fast switching i assume. however, this proved disastrous and in the midnight hour i de-soldered everything and used breadboards instead.

as for the software on the arduino, one thing that had perplexed me was how to send an array of data to the arduino from processing. for the initial prototyping i was simply sending (per light group) a character for "on" (i.e. "Q") and then sending another character for "off" (i.e. "Z"), multiply this by 12 and you can see how ridiculous it is. i needed something that wasn't based on arbitrary key characters.

since arduino is its own little computer, its internal timing isn't in sync with the signal-sending computer (my macbook), which is why you have to use a starter marker. arduino waits for the marker buff[0] = Serial.read(); and as soon as it sees it, i can tell arduino that everything following the marker is what it needs to use. i fill the rest of the buffer with these values for(int i=1; i<13; i++) buff[i] = Serial.read(); in this case, 12 values (0 or 1) which tell each output to be on or off.

here's a simple example of the arduino code we used to control the lights.

int PIN_LICHT[3] = { 2,3,4 };

//buffer should be long enough to hold the desired values
//as well as the starter marker
//i.e. L000
byte buff[4];

void setup() {
  Serial.begin(9600);
  for(int i=0; i<3; i++) pinMode(PIN_LICHT[i], OUTPUT);
}

void loop() {
  while( Serial.available() >= 4 ) {
    buff[0] = Serial.read(); //wait for the starter marker
    switch( buff[0] ) {
      case 'L':
        //everything following the marker is a light state
        for(int i=1; i<4; i++) buff[i] = Serial.read();
        break;
    }
  }

  //outputs (buff[0] holds the marker, values start at buff[1])
  for(int i=0; i<3; i++) {
    if(buff[i+1] == 0) digitalWrite( PIN_LICHT[i], LOW );
    else if(buff[i+1] == 1) digitalWrite( PIN_LICHT[i], HIGH );
  }
}
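on the processing side, the matching message is just the marker byte followed by one byte per light. here's a sketch in plain java of how such a message might be packed before writing it to the serial port (a hypothetical helper, not our exact code):

```java
public class LightMessage {
    // pack light states into a serial message: marker 'L' + one byte per light
    static byte[] build(boolean[] states) {
        byte[] msg = new byte[states.length + 1];
        msg[0] = 'L'; // starter marker the arduino waits for
        for (int i = 0; i < states.length; i++) {
            msg[i + 1] = (byte) (states[i] ? 1 : 0); // 1 = on, 0 = off
        }
        return msg;
    }

    public static void main(String[] args) {
        // three lights: on, off, on
        byte[] m = build(new boolean[]{ true, false, true });
        System.out.println((char) m[0] + " " + m[1] + " " + m[2] + " " + m[3]); // prints L 1 0 1
    }
}
```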



the software
the light bulbs were controlled by rosanna's glove; each finger could control different pre-programmed patterns.



to make the bulbs flicker on and off as desired was a bit complicated. we had originally thought that each finger could control a group but this proved to be rather boring. so eugen programmed a rather ingenious pattern system. the patterns controlled just how much "color" (i.e. groupings of bulbs) could be on. this allowed us to control the build up of the performance. these patterns could be cycled through by the glove.

the fluorescent lights were much easier to control: one hit equals one light, regardless of which dancer sends the signal. the only exception was one of the dancers had a sensor on his shoe; when this was activated, all of the fluorescents came on at once.

we created an override feature for all of the lights in case during the performance we needed/wanted to turn certain groupings off.


result
the night of the performance went well, though there's one glitch that still bothers me. the relays for the fluorescent lights would sometimes not switch off, keeping that group of fluorescents on. eugen and i tried for hours to solve the problem, going one by one and replacing wires, relays, etc. i still have no idea what the problem could be. the only solution we found was to pop out the offending relay and put it back in. i had to do this once during the performance. reception and feedback has been great. this was a great project and we all learned a lot from it.

in the near future i will post the code (processing and arduino) we used for the event, so i've started a google project for the source http://code.google.com/p/schhplttlr/. in the meantime, if you're interested in the code just send me an email.

also, throughout the project we continually updated my processing interface. very soon i will update the archive on the google code page to reflect these changes, which will include Timer and MultiTimer functions.
