Time expansion and contraction

I have this notion that time is a continuing present. I’ll try to explain the picture I have in my mind. Imagine a painting that behaves like a fractal: the deeper you go into some part of it, the more complex it becomes, and the more you can zoom in…

I see this in every sense we have as humans.

Visual: You are located in a certain place, yet you can take a telescope or a microscope and look outwards or inwards.

Audio: A sound wave is transmitted through the air… yet you can always (again) zoom in (play it slower) so the sound expands, or zoom out (play it faster) so things sound shorter.
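To make the audio zoom concrete, here is a toy Python sketch (my own illustration, not from any audio library) that “plays” a list of samples at different rates: stepping through the samples faster compresses the sound in time, stepping slower expands it.

```python
def resample(samples, rate):
    """Naively resample by stepping through the input at `rate`.

    rate > 1 -> zoom out: fewer output samples, the sound gets shorter.
    rate < 1 -> zoom in: more output samples, the sound expands.
    """
    out = []
    pos = 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

wave = [0, 3, 6, 3, 0, -3, -6, -3]  # one toy "cycle" of a wave

fast = resample(wave, 2.0)   # played twice as fast: half as long
slow = resample(wave, 0.5)   # played at half speed: twice as long

print(len(fast), len(slow))  # 4 16
```

The same 8 samples become 4 when zoomed out and 16 when zoomed in; real audio software does this with proper interpolation, but the picture is the same.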

Sense of time: when talking about time, it is only speculation, because no human (as far as I know) has become a super-human and expanded his consciousness out of the four dimensions we live in… The present moment is part of a future picture we hold in our minds, and of a past (personal and cultural) history that we remember… Any given moment of the present is a creation of the mind… Yet if we somehow take our mind backwards, we can experience a rebirth of childhood, thus making the next moment a brand new one. This image is somewhat what I am trying to show (taken from the Minkowski space definition):

What made me wonder is what happens in the mind. As I said before (I think in a past blog post), any plot creates links in our mind. Following the sense-of-time paragraph above: when something passes from past to future, when does our mind grasp the notion that was read or viewed (so that it becomes part of the mind), and moreover, when is the moment we come to believe something? Of course believing is very subjective: there are people who believe their eyes, ears, and body, and on the other end there are those who cast doubt on all their senses.

MIDI audio software

I managed to install my old MIDI connector on my Win8.1 machine. Unfortunately the old drivers do not support this legacy hardware very well, so there are problems such as disconnections from the MIDI device… still, it enables some sort of use for a short while.

Anyhow, I wanted to note this wiki page: Comparison of MIDI editors and sequencers. The vast amount of software available at any given time is great.

I found there Linux MultiMedia Studio, which has cross-platform (Windows/Linux) support. It is a very easy to use, multifunctional application. Composing audio, and using it with an attached MIDI device, has never been easier.

I am not an expert in audio composition, yet the ease of creating, for example, a sample audio file, then pitching it and building a full-length composition from it, is tempting enough to give it a try.

Playing Stories

I believe that any scene (story) is a reality,
whether it is played out in reality itself,
whether it is in a show or a movie,
whether it is played in a computer game,
whether it is virtually performed as text or video,
whether it is written as text in a book.

The thing is: what do the actors think? Do they have an audience? (Or, if there is none, do they know they don’t have one?) [If a tree…] What feelings do they take on in the scene itself? How does the scene affect them after they finish playing it? How does a scene affect its viewers/readers/players?

Can a person hold within himself, in the same time/space, multiple feelings (or multiple personalities), where the same time/space can be interpreted in totally different ways, one act at a time, and later can be viewed only in those ways (one by one) that were meant to produce that scene?

On one of the podcasts I listen to, they presented people who have extraordinary memory (a condition called Hyperthymesia: highly superior autobiographical memory) and can recall any little detail that has occurred in their lifetime. The focus of the podcast was a person who could tell you any detail about a fictional basketball team he had invented (imagined), whose entire history he had performed in his imagination. Because he has hyperthymesia, he could recount each player’s full childhood history and tales… Yet all of this was played out only in his memory. But it was as real as any other team… The details he gave were consistent, and he would tell the same exact details even when asked a couple of years later.

The example in the last paragraph is as real as any other performance… Although it is just an invention, and exists only in his memory, who can say what is real and what is not?

What if I were to say that a future being could project his future back to the past…? Is that real? What is the difference between these realities and any other…?

I recently thought about an idea that would rate each person and news clip by a collection of people… Yet even if a huge collection of people rated something, would that make the rating valid? It would only create a certain index. The attitude toward this index is in the eye of the beholder… like any other science or story told.
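As a toy sketch of that index idea (the names, scale, and ratings here are made up by me for illustration), aggregating many people’s ratings only ever produces a number; what the number means is still up to the beholder:

```python
def crowd_index(ratings):
    """Average a collection of 1-5 ratings into a single index number."""
    return sum(ratings) / len(ratings)

# a "collection of people" rating one news clip
clip_ratings = [5, 4, 4, 2, 5, 3, 4]
index = crowd_index(clip_ratings)
print(round(index, 2))  # 3.86
```

The index is well defined, but nothing in the arithmetic says whether 3.86 makes the clip trustworthy; that interpretation stays with the reader.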

Another idea comes to mind: does a certain attitude toward something hold any mass, beyond being virtual, a feeling toward an event?

Blurred reality

Film and movies have progressed visually a lot in the past years. Although I more or less stopped watching movies in the past decade, I can imagine these changes by watching parts of intros from different movies. The making of these movies has progressed, and is progressing, a lot even as you read these lines.

I roughly categorize the types of films as follows:

  1. 2d animation – At first there were fully 2d animated movies (pictures drawn by hand, frame by frame).
    Examples like the early Disney classics come to mind.
  2. Stop motion 3d movies – Made with live characters or animated puppets.
  3. Live actors – Then live motion pictures were presented (live human or animal actors created a scene). I won’t exclude here films made with added layers of drawings or moved puppets.
    Examples of such movies are widespread, and not hard to find (yet).
  4. 3d animated movies – Then there was a period of 3d animated pictures (movies created entirely in a virtual world and animated frame by frame, or via an automatic procedure enabled by the software they were made with).
    Examples can be found among the early Pixar movies.
  5. Nowadays – pictures commonly use motion capture of facial or full-body movement, dressing actors with sensors so that the 3d objects/characters in the scene move accordingly. This process, of course, made much of the work of 3d animated movies (mentioned in point 4) redundant. Yet this combination is hard to define, because it is neither exactly live actors nor a 3d animated movie; it is the combination of the two that matters. So the question raised here is whether to consider the actors as actors (if they really wore all the sensors and dubbed the character).
    Examples of such movies can be taken from Peter Jackson‘s “Lord of the Rings” saga, or from “Avatar“, and many more that use this technique.

I say that no matter what technique you use to create a movie (or how you define the actors or the making of it), if the result is acceptable… then so be it.

As I write these lines, the following questions come to mind:

  1. One thing I thought: if the backbone of a 3d movie is made well, it is very simple to swap one image for another, much like changing an image on an HTML page.
  2. The future of these movie techniques is yet to be invented (or is in stages toward a new invention).
  3. Will there be a reinvention of movies, or will it all become an advanced interactive computer game, in which both motion and plot are open to change?

Circular actions

I started thinking about how people run in circles, in such a way that when they reach the top of the circle they tend to forget the originating point opposite to where they are on the circle (due to lack of resources, or bad memory, for this or that reason), and then their actions tend to lead them back to that opposite point.

It doesn’t matter which resources you use; nature is commonly built this way. People who are alienated from their own behavior tend to be disappointed, or get mad, because of this fact.

After thinking about this definition, I continued thinking more about it, and came up with this one:

What if any action opened ALL POSSIBILITIES to you? Without taking the starting point out of the equation, or any of the points that led you to the current place, you would be able, at every moment, to reach any of the points in this n-point graph you have virtually created.

It is easy to explain and picture this when talking about the 2nd or 3rd dimension; yet in the 4th (including the element of time), or the 5th (one can go further, to the Nth dimension, as well), it would produce an extraordinary picture.
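A minimal sketch of that n-point graph in Python (my own toy model, not any established formalism): every new action becomes a point linked to all earlier points, and since no point is ever removed, the whole history stays reachable from any moment.

```python
# Toy model of the "n-point graph": each new action connects to every
# previous point, so any point stays reachable from any other.
points = []
edges = set()

def act(label):
    """Take an action: add a point and link it to all earlier points."""
    for p in points:
        edges.add((p, label))
    points.append(label)

for step in ["start", "left", "up", "back"]:
    act(step)

# with n points, a complete graph has n * (n - 1) / 2 edges
print(len(points), len(edges))  # 4 6
```

Four actions yield a complete graph of six connections; nothing that led here is ever taken out of the equation.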

Nonetheless, when talking about systems that are based on resources, whether the resources

  1. are physical – such as elements of buildings or gadgets,
  2. or knowledge – where you would have to dig deep enough to find a solution for a given unknown element or formula

– on these occasions it is a hard task, due to the simple fact that you do not always have all these resources, physical or knowledge-based.

Defining a system that would deliver the solution to a user of such a system could be great. How many of us know exactly how a daily-used device works, or how a certain system works? Most of the time, expertise in one area or another consumes a lifetime of learning and experimenting.

Even experts use shortcuts in their work; sometimes they just press a magic button that gives them the solution for a given task.

Someone using a future system would look like a magician to observers in a system that doesn’t know much about the whole, and could give rise to stories associated with legends or fairy tales. Yet these stories make the readers, watchers, or listeners part of the same system as well, opening possibilities to them and empowering them with powers they didn’t know existed within them.

The receivers become part of the system the transmitter gives them. One, most of the time, has the power to decide whether to be part of that transmitter’s system. Smart people know how to avoid stepping into systems they dislike.

Microboards

I’ve been thinking for a while now about which microboard to purchase and play with. Basically, microboards are small electronic boards which you can develop on and build all sorts of cool stuff with.

These are the main ones on the market these days that I know of:

  1. Raspberry Pi – Raspberry Pi Wiki pages
  2. Arduino – Arduino Wiki pages
  3. BBC Micro Bit – BBC Micro Bit Wiki pages
  4. And many more across the globe

The boards have basic connectivity ports (such as a USB port and a power supply port), and you attach all sorts of electronics to them (such as LED lights, an SD card for storage, etc.).

You can build all sorts of things with these microboards, such as:

  1. Digital slide-show frames
  2. 3d printers – (for example, using an Arduino board)
  3. Logo turtle bots
  4. Electronic LED displays that change – (according to audio frequencies)
  5. LED text banners

After you understand the basic electronics, you can download (or better yet, write yourself) an application for the board, using the electronics you’ve attached to it.

Every board has its own software toolkit. You download the SDK to your home computer; most of them support all the common platforms (Linux, Windows, or OSX). The SDK lets you decide what you would like to run on this gadget you’ve just built.
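Just to give the flavor of programming such a board, here is a tiny Python sketch. Note that `FakePin` and `blink` are stand-ins I made up: a real board’s SDK provides its own pin classes and names, which differ from board to board.

```python
class FakePin:
    """Stand-in for a GPIO pin object (real SDKs provide their own class)."""
    def __init__(self, number):
        self.number = number   # which physical pin the LED is wired to
        self.state = 0
        self.history = []      # record of every value written, for inspection

    def write(self, value):
        self.state = value
        self.history.append(value)

def blink(pin, times):
    """Toggle an LED attached to the pin on and off `times` times."""
    for _ in range(times):
        pin.write(1)  # LED on
        pin.write(0)  # LED off

led = FakePin(17)
blink(led, 3)
print(led.history)  # [1, 0, 1, 0, 1, 0]
```

On real hardware the `write` calls would drive voltage on the pin (usually with a sleep between toggles so the blink is visible); the structure of the program, though, is exactly this simple.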

As for me, I haven’t yet decided which gadget I would like to build, and with which of all these board possibilities I would like to build it.

Diving into 3d programming – Supplement (Web)

As I started with some 3d programming in Delphi, another requirement was raised: I needed to port some of the code to a web-based engine.

I recalled that I once played a little with VRML. Yet after a quick search I found the following WebGL 3d engine, called three.js. Because I was stunned by its results, I decided to post this short supplement to my previous post.

three.js handles the limited processing power of browsers, taking it into consideration and making heavy use of the HTML5 architecture. This open-source package includes many demos, which can be viewed via the link above.

Although the package is (from a very short look) structured much like Delphi’s GLScene, it uses JavaScript in a very impressive way.

I’ll post some more notes, after building and using it a little more.

Diving into 3d Programming

Recently I’ve been looking into the area of 3d programming. Here are a few notes from this area of computing.

This area of 3d programming would serve anyone in:

  • the 3d games programming area – (DirectX and OpenGL have plenty of code and samples of, for example, performing character manipulations: character walk, or talk);
  • the 3d design area – such as architects (yet this area would likely use fewer of the options available in a 3d environment than 3d games would);
  • the 3d animated movies area (unlike 3d Studio-type tools, for example, these abilities enable one to create a scene of, say, a character doing random things, without any special effort).
    Thinking of 3d animated movies, plenty of tools are on offer. Most scenes are usually defined by the director (for example, the ‘Lord of the Rings’ armies’ fights, and so on).
    I can think of a whole world one could create, much like the 13th Floor movie, where the characters have a life of their own, and the platform enables them all these daily “random” activities.

When speaking of 3d programming on Windows, there are roughly two basic architectures. When using a dev environment such as Delphi, you can write to both engines seamlessly, while both architectures let you manipulate the 3d environment down to the detailed variables and features you would like to achieve:

DirectX and Delphi:
To program for that environment I used all of these, at different times:

  • Delphi FMX – Embarcadero introduced this cross-platform graphics environment, which has plenty of goodies and abilities.
  • ksDev dxScene – (which later became the FMX architecture)

OpenGL and Delphi:

  • glScene – an open-source OpenGL environment for use with Delphi.

I’ve focused on these two, but I’m sure that looking for alternatives one can find plenty more. One I recall seeing is vrmlScene.

—-

Here are some notes from my first dig into the 3d environment:

I’ve dived into this ocean of shaders (which are essentially effects performed on the 3d model data), edge-detection techniques, and all these graphics-manipulation abilities.

After I switched from dxScene (DirectX) to glScene (open-source OpenGL libraries), all the searches I made became much more focused. OpenGL seems to be much more common than DirectX, although the abilities of the two platforms are somewhat similar.

Anyhow, performing the manipulations programmer-wise offers awesome abilities, though the designated final goal must be well defined.

Here is a sample URL of a question about the edges issue, which advises a way to render the edges. Another example is this one, about displaying a wireframe, though I do not know whether it answers the question as well.
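As a CPU-side illustration of the edge-detection idea those links discuss (a real shader would run the same kind of per-pixel test on the GPU), here is a toy Python sketch of mine that marks a pixel as an edge when its brightness jumps relative to its right-hand neighbor:

```python
def edges(image):
    """Mark horizontal edges: 1 where brightness jumps between neighbors."""
    result = []
    for row in image:
        out_row = []
        for x in range(len(row) - 1):
            # an "edge" is any brightness difference between adjacent pixels
            out_row.append(1 if abs(row[x] - row[x + 1]) > 0 else 0)
        result.append(out_row)
    return result

# a tiny 3x4 "image": dark left half, bright right half
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(edges(image))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The vertical boundary between the dark and bright halves shows up as a column of 1s; GPU shaders typically use fancier kernels (Sobel and friends) and a threshold, but the core comparison is this.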

Implementing the shader can be done, as you see in the examples above, using OpenGL’s C-like syntax. OpenGL (like all software) comes in version flavors, and certain syntaxes of the language are valid only for the proper OpenGL version.

The shader API defines parameters to be passed between the shader and the rendering application, and it is very dynamic, being defined that way. For example, a shader can take only a color variable, or it can take multiple parameters, according to the way the rendering application defined it.

Another extension of this whole process can be done via direct GPU programming, using for example Cg (not to be confused with Cygwin, which provides a C++ compiler for Windows).

Cg is, in effect, a C-like language and compiler for shaders.

Making DumbDisplays smarter with BigData

DumbDisplays: most of us have a dumb display at home; a dumb display is just another CRT or old LCD display without any web connectivity or media center attached to it. Most new TV displays have a chip built in, commonly known as a SmartTV. What I describe here is jumping to the level of a SmartTV when you have only a DumbDisplay.

Big Data is just another buzzword… until you actually implement it. Big data is any service that aggregates data (whether video, audio, text based, etc.), stores it, and dumps it to any of the different media or platforms. Most big data on the web offers companies an API to the data, making the BigData mineable via 3rd-party software.

Middle devices – SmartTV: a middle device is software enabled, with an in-port for a web connection and an out-port to the display… The connectivity to BigData engines such as Facebook Graph Search, Twitter search, or G+ search, etc., is what gives the described process its whole strength.

Of course buying a set-top box could be an idea, yet I am suggesting a cheap and immediate alternative. What I describe below is only one suggestion, out of many, for boosting your life and your accessibility to the web.

Here are the steps to go forward and leap into the era of availability and web access.

Steps to make your TV smarter:
1. Connect the dumb display to the web, via a local internet connection.

2. Set the requirements for connecting old monitors/displays/TVs to the web.
There are a variety of ways to do that:

      • Extend existing devices’ capabilities, for example:
        – use a desktop/laptop computer that has a port such as RCA or HDMI, and define the target display as just another monitor in your system;
        – use a dongle/mobile device (for example an old iPod, or a similar type of device) that has the required connectors and the required software to project the web onto the screen.
      • Define the type of the middle device’s connection, for example WiFi or wired.

3. Pick proper software, which should be:
– configurable on the one hand,
– and, on the other hand, should not require any immediate interaction while presenting the data.

Check out stevie.com, which enables you to do so cross-platform on any device, whether iOS, Android, or Windows.
Proper applications will let you interact with the ‘dumb’ middle device via mobile or remote services.

Stevie takes the BigData mined from the miscellaneous databases accessible on the web and just streams it to your dumb display’s screen, virtually creating a personalized channel, which can always be reconfigured according to your desires.

BigData associated with any one of the web’s social engines can create a private channel presenting ‘your friends’ data’, or, even more personally, ‘your family’s data’.

When talking about data and channels, I mean it can output (given the right software and queries) a channel containing personalized:

  • video clips / audio tracks,
  • picture slide shows,
  • text news feeds,
  • or any mix of these data types.
