Tuesday, October 2, 2012

Saving JPEG Metadata the FIRST Time with WPF

So, I've been seeing a bunch of related questions on Stack Overflow and recently bumped into the same problem at work...  Turns out that setting the EXIF metadata on an image is EXTREMELY difficult.  Most solutions involve saving the image first and then opening it back up...

Of course, I couldn't just "accept" that.  Unfortunately, the work-around is a bit of a hack, but it does work reliably!

So, in the following example, assume that I have a RenderTargetBitmap from WPF...

JpegBitmapEncoder imageEncoder = new JpegBitmapEncoder { QualityLevel = 100 };
imageEncoder.Frames.Add(BitmapFrame.Create(renderTarget));

using (MemoryStream memoryStream = new MemoryStream())
{
    imageEncoder.Save(memoryStream);
    memoryStream.Position = 0;

    JpegBitmapDecoder decoder = new JpegBitmapDecoder(memoryStream, BitmapCreateOptions.None, BitmapCacheOption.Default);
    BitmapFrame frame = decoder.Frames[0];
    BitmapMetadata metadata = (BitmapMetadata)frame.Metadata.Clone();

    metadata.Subject = "This is a test subject.";
    metadata.Title = "This is a test title.";
    metadata.Comment = "This is a test comment.";

    imageEncoder = new JpegBitmapEncoder { QualityLevel = 100 };
    imageEncoder.Frames.Add(BitmapFrame.Create(frame, frame.Thumbnail, metadata, frame.ColorContexts));

    using (var fileStream = new FileStream(saveFileDialog.FileName, FileMode.Create))
    {
        imageEncoder.Save(fileStream);
    }
}

This is, of course, a hack -- it saves the image to a memory stream and decodes it again, which yields a metadata object the encoder will actually accept (normally, this blows up in your face if you try to pass in a BitmapMetadata that you created on the fly).

Now, you can set it up however you like -- go nuts!
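Beyond Subject, Title, and Comment, BitmapMetadata exposes a handful of other typed properties you can set the same way before re-encoding.  A quick sketch, continuing from the `metadata` object above (the values are just illustrative):

```csharp
// Other typed BitmapMetadata properties, set just like Subject/Title/Comment.
metadata.ApplicationName = "MyRenderer";     // "software" tag
metadata.CameraManufacturer = "Contoso";
metadata.CameraModel = "Virtual Camera 1.0";
metadata.Rating = 4;                         // 0-5 star rating
metadata.Keywords = new System.Collections.ObjectModel.ReadOnlyCollection<string>(
    new[] { "render", "wpf" });              // tag list
```

All of these end up in the appropriate EXIF/XMP fields when the frame is re-encoded, just like the three shown above.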

Tuesday, August 7, 2012

3D Programming and Calculating Normals

So, I've had a really difficult time calculating normal vectors (herein referred to as "normals") in OpenGL, and it took a lot of learning to finally make sense of them.  There are a few good tutorials out there, but they still don't explain everything you want to know.  So, I've decided to post about how they work and how to build them.

A normal is a vector that describes the direction perpendicular to (pointing away from) a point (i.e. a vertex) or a face (i.e. a polygon).  It is important for lighting, since the renderer needs to know how light reflects off of a surface.  I always figured lighting should be able to figure this out -- I mean, why shouldn't it?  However, that's another topic entirely.

Vertex and face normals each serve their purpose.  For example, smooth terrain generally looks better with vertex normals.  Since OpenGL interpolates the colors (and therefore the lighting) between vertices, you'll see a gradient across your polygon when each vertex has its own normal.  With a face normal, a single normal is applied to all vertices in the polygon, so the polygon as a whole is uniformly lit/shaded.  That looks great on something like a cube, where you want clear lines separating the object's faces.  It looks really terrible on terrain.

For the sake of this tutorial, I'm going to explain how to calculate a face normal.  Since all polygons can be decomposed into triangles, I will be using a triangle for our face.  (For a more complex polygon, you can calculate the face normal by averaging the normals of its triangles together.)


So, suppose we have a triangle with vertices A, B, and C.

We need to calculate the individual edge vectors and take their cross product.  Please note that you need to observe the right-hand rule when calculating normals, which means you need to always build your triangles with vertices in the same order.  For example, in my example above, ABC is built counter-clockwise, from A to B to C.  Some people build their triangles so that the vertices A, B, and C are the upper-left corner, upper-right corner, and lower-left corner (whereas mine are the upper-left corner, lower-left corner, and lower-right corner, respectively).  In that alternate case, the triangle would be built clockwise.  There are a number of ways to build a triangle -- just make sure you build them all either clockwise or counter-clockwise, and that each triangle is wound in the same order as the prior one.

From here, we need to calculate the two edge vectors.  We use the middle vertex of the triangle as a pivot (again, this can change based on how you build your triangles).  For my version, it is "B."  So, we want to calculate the vector of each edge around this "pivot" vertex.

Simply put, we will have two vectors, U and V, which will be built as follows:

U = B - A
V = C - B

To do vector addition and subtraction, we just add/subtract the components.  For U and V above we need subtraction, so simply put...

method SubtractVectors( Vertex A, Vertex B )
{
  create Vector where Vector.X = A.X - B.X, Vector.Y = A.Y - B.Y, Vector.Z = A.Z - B.Z;
}

(So U = SubtractVectors( B, A ) and V = SubtractVectors( C, B ).)

Now that we have U and V, we'll have to take the cross product of the two vectors -- this is the "multiplication" we want, since it produces a vector perpendicular to both U and V... and it is a bit more painful.

method CrossProduct( Vector U, Vector V )
{
  create Normal;
  Normal.X = ( ( U.Y x V.Z ) - ( U.Z x V.Y ) );
  Normal.Y = ( ( U.Z x V.X ) - ( U.X x V.Z ) );
  Normal.Z = ( ( U.X x V.Y ) - ( U.Y x V.X ) );
}

Aren't you glad we used vector names U and V, instead of X and Y?  That code would be impossible to read!

Now, you have a "normal" of the face of the triangle -- but you're STILL not done.  The normal should be normalized to a unit length.  If you want to calculate a vertex normal, you may save this operation until after you're finished calculating all of your face normals.

To normalize a vector, you need to find its length (magnitude), which is done with the Pythagorean theorem.

The length of a vector T is calculated as follows:

Length = SquareRoot( ( T.X x T.X ) + ( T.Y x T.Y ) + ( T.Z x T.Z ) );

Then, divide each component of the vector by its length as follows:

T.X = T.X / Length;
T.Y = T.Y / Length;
T.Z = T.Z / Length;

You have now "normalized" your normal.
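To tie the steps together, here is a minimal, self-contained C# sketch of the whole face-normal calculation.  The Vec3 struct and the vertex values are my own illustration, not from any particular framework:

```csharp
using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

    // Component-wise subtraction: returns a - b.
    public static Vec3 Subtract(Vec3 a, Vec3 b) =>
        new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);

    // Cross product: perpendicular to both inputs.
    public static Vec3 Cross(Vec3 u, Vec3 v) =>
        new Vec3(u.Y * v.Z - u.Z * v.Y,
                 u.Z * v.X - u.X * v.Z,
                 u.X * v.Y - u.Y * v.X);

    // Scale to unit length.
    public Vec3 Normalized()
    {
        double length = Math.Sqrt(X * X + Y * Y + Z * Z);
        return new Vec3(X / length, Y / length, Z / length);
    }
}

class Program
{
    static void Main()
    {
        // A triangle lying flat in the XZ plane.
        Vec3 a = new Vec3(0, 0, 0);
        Vec3 b = new Vec3(0, 0, 1);
        Vec3 c = new Vec3(1, 0, 1);

        Vec3 u = Vec3.Subtract(b, a);   // U = B - A
        Vec3 v = Vec3.Subtract(c, b);   // V = C - B
        Vec3 normal = Vec3.Cross(u, v).Normalized();

        Console.WriteLine($"{normal.X}, {normal.Y}, {normal.Z}");  // prints 0, 1, 0
    }
}
```

Since the triangle lies flat, the resulting unit normal points straight up along +Y, which is exactly what you'd expect.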

But wait, there's more!  We really wanted to get the vertex normals!

We can calculate a vertex normal by adding together the face normals of every face that shares the vertex, and then performing the same normalization we did before.

Suppose we use a data structure (I have a persistent Dictionary<int, Dictionary<int, Vertex>> object I use) that contains all of the vertices of a mesh by their X, Y values.  X, Y would refer to the X and Y coordinates of the triangle mesh, which in my case (and many others) are generally a uniform distance from each other (i.e. always 1 unit away from each other).

Assume my earlier triangle now has another point directly above C, called D.  We now have a quadrangle.


Assume my format has several of these next to each other to make a surface of terrain or something else.  Assume that these share points, such that D and C become the next A and B vertices for a quad that would have an E and F added on.  (i.e. quad one is built with A, B, C, D; quad two is built with D, C, E, F)

Assume also that I can calculate a face normal by providing three vertices of a triangle to the method CalculateNormal.  This would utilize the same pseudo-code mentioned above.

I could loop through a mesh as follows:

normals = create Dictionary[int][int] containing Vector;

for( y = 0; y < mesh.Height - 1; y++ )
{
  for( x = 0; x < mesh.Width - 1; x++ )
  {
    //Triangle ABC
    Vector normal = CalculateNormal( mesh[x][y], mesh[x][y + 1], mesh[x + 1][y + 1] );
    
    //Add the normal components to the vertices that are used
    normals[x][y] += normal;
    normals[x][y + 1] += normal;
    normals[x + 1][y + 1] += normal;

    //Triangle ACD
    normal = CalculateNormal( mesh[x][y], mesh[x + 1][y + 1], mesh[x + 1][y] );

    //Add the normal components to the vertices that are used
    normals[x][y] += normal;
    normals[x + 1][y + 1] += normal;
    normals[x + 1][y] += normal;
  }
}

At this point I would have all of my normals added up, but they'd be huge and non-normalized.  So, we'd normalize them by calculating the length as I did for a face normal, and then finally dividing the components by that length.

That, my friends, is how you would go about building normals from a uniform mesh with differing heights.
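That final normalization pass over the dictionary can be sketched in C# as follows (Vec3 here is a hypothetical struct with X, Y, Z components, standing in for whatever vector type your engine uses):

```csharp
// Normalize every accumulated vertex normal in place.
// Keys are copied first so we can replace values while iterating.
foreach (int x in new List<int>(normals.Keys))
{
    foreach (int y in new List<int>(normals[x].Keys))
    {
        Vec3 n = normals[x][y];
        double length = Math.Sqrt(n.X * n.X + n.Y * n.Y + n.Z * n.Z);
        normals[x][y] = new Vec3(n.X / length, n.Y / length, n.Z / length);
    }
}
```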


Tuesday, July 10, 2012

JSON and the Null Character

I've been working with WebSockets lately and have been porting them across various applications.  Well, everything was fine and dandy, until I got to the HTML5 version of our application (or rather, the future of it).

For some reason, I kept getting "invalid_token" or "invalid token" with no trace of any other information.

Turns out, the C# side of the WebSocket connection was appending \0 to the end of the data.  D'oh!
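If you control the C# side, one option is to strip the terminator before the data ever reaches the browser.  A rough sketch -- `buffer` and `bytesRead` here stand in for however your WebSocket code receives its data:

```csharp
// Decode the raw frame, then drop any trailing null characters
// before the payload reaches JSON.parse on the client.
string payload = System.Text.Encoding.UTF8.GetString(buffer, 0, bytesRead);
payload = payload.TrimEnd('\0');
```

If you can't touch the server, the client-side fix below works just as well.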

The simple fix for this (when you're having issues with JSON.parse) is to change your code from:


var jsonObject = JSON.parse(jsonEncodedString);

to

var jsonObject = JSON.parse(jsonEncodedString.replace(/\0/g, ""));

With that, your code should hopefully work!  If not, there's always http://jsonlint.com/ to attempt to validate your JSON.

Wednesday, May 23, 2012

The Amazing EventProxy

Recently at work (as these posts usually come), I've been tasked with loading our entire framework (after downloading it from a remote source) and executing everything in a way that lets us administer it.  I've been delving into the entire AppDomain paradigm, which was plenty of fun to deal with for the 70-536, but I haven't really touched it (or read up on it) in about five years.

That being said, when working with AppDomains, you have to make sure that the objects you're using can remote across to other AppDomains correctly -- and the easiest way to do that is generally to derive the classes involved from MarshalByRefObject.  Pretty easy, huh?  Almost.

There is one caveat to this -- if you want to subscribe to an event on your cross-domain object, the class holding the handler must be Serializable or also derive from MarshalByRefObject.  Still pretty simple, right?  Mostly -- but what if you have a control or user-interface element that already derives from something else?  Something that shouldn't be serialized at all...  In my case, it is a ViewModel.

After much deliberation, the smartest idea was to create a proxy -- pretty cool and easy, right?  Almost.  The only problem is that if I hard-coded the proxy, we'd be stuck with constantly updating the proxy for each and every object.  Some of our calls are just a simple EventHandler<EventArgs>, but some of our calls are more complicated EventHandler<MakesYourMindExplodeFromAnotherAppDomainEventArgs>.  How could I possibly allow this level of flexibility...?

Enter, the EventProxy:

    public class EventProxy<T, Args> : MarshalByRefObject
        where Args : EventArgs
    {
        public event EventHandler<Args> EventOccurred;

        public EventProxy(T instance, string eventName)
        {
            EventInfo eventOccurred = typeof(T).GetEvent(eventName);
            if (eventOccurred != null)
            {
                MethodInfo onEventOccurred = this.GetType().GetMethod("OnEventOccurred", BindingFlags.NonPublic | BindingFlags.Instance);

                Delegate handler = Delegate.CreateDelegate(eventOccurred.EventHandlerType, this, onEventOccurred);

                eventOccurred.AddEventHandler(instance, handler);
            }
        }

        private void OnEventOccurred(object sender, Args e)
        {
            if (EventOccurred != null)
                EventOccurred.Invoke(sender, e);
        }
    }

It is incredibly simple.

It's also important to note that this needs to be an assembly that both AppDomains reference.

For my ViewModels, I'm adding it as such:

   EventProxy<CrossDomainObject, EventArgs<string>> Proxy = new EventProxy<CrossDomainObject, EventArgs<string>>(MyCrossDomainObject, "StatusUpdated");
   Proxy.EventOccurred += (sender, e) => Status = e.Value;

In this example, CrossDomainObject is my cross-domain class, and MyCrossDomainObject is an instance of that class.  The EventArgs<string> is a generic EventArgs that carries a string.  From there, I just update the status in the ViewModel, without any SerializationException issues.
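If you need a starting point for that generic EventArgs, a minimal sketch might look like the following -- note the [Serializable] attribute, so the args can cross the AppDomain boundary:

```csharp
using System;

// Hypothetical generic EventArgs carrying a single value.
[Serializable]
public class EventArgs<T> : EventArgs
{
    public T Value { get; private set; }

    public EventArgs(T value)
    {
        Value = value;
    }
}
```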

There are a few safeguards that would help make sure the EventProxy doesn't blow up and Exception out to hell, but it's a good start for someone to build on.

Thursday, May 17, 2012

DependencyProperty vs. INotifyPropertyChanged

I've been developing UI components for work, and I've noticed a lot of shifting between DependencyProperty and implementing the INotifyPropertyChanged interface.  When and why do we use each?

The simplest answer is that DependencyProperty is a lot more heavyweight than INotifyPropertyChanged.  Primarily, you'll see DependencyProperty much more useful for UI components and binding, rather than on the ViewModel.  Why is this?

Simply put, implementing INotifyPropertyChanged will not allow you to style or template your control right off the bat.  While there are ways around this, it's a bit more of a pain -- and why make your life painful when you can make it simple and easy?

DependencyProperty, on the other hand, will allow you to do just that.  If you implement a Gauge control, for example, and want a specific element on the Gauge to have a certain color, you won't be able to put a global style on it with INotifyPropertyChanged.  A DependencyProperty registers with the styling system immediately, and you're good to go.

I think it really boils down to two cases.

  1. For UI control development and tight coupling, use DependencyProperty.
  2. For everything else (and whenever serialization is necessary), use INotifyPropertyChanged.

That being said, there is one caveat -- writing a DependencyProperty is a hell of a lot easier and quicker.  Using the "propdp" snippet in Visual Studio makes coding them a breeze.
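For comparison, here's what the two flavors of a bindable property look like side by side.  This is a minimal sketch with illustrative names (GaugeViewModel, GaugeControl, Speed), not code from an actual control:

```csharp
using System.ComponentModel;
using System.Windows;

// INotifyPropertyChanged flavor: typical for ViewModels.
public class GaugeViewModel : INotifyPropertyChanged
{
    private double speed;

    public event PropertyChangedEventHandler PropertyChanged;

    public double Speed
    {
        get { return speed; }
        set
        {
            speed = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Speed"));
        }
    }
}

// DependencyProperty flavor: typical for UI controls; stylable and templatable.
public class GaugeControl : FrameworkElement
{
    public static readonly DependencyProperty SpeedProperty =
        DependencyProperty.Register("Speed", typeof(double), typeof(GaugeControl),
                                    new PropertyMetadata(0.0));

    public double Speed
    {
        get { return (double)GetValue(SpeedProperty); }
        set { SetValue(SpeedProperty, value); }
    }
}
```

The second version is more boilerplate by hand, but the "propdp" snippet generates almost all of it for you.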

Wednesday, April 18, 2012

OpenGL Picking with the Tao Framework

Disclaimer: This is an old post, which I am migrating from my old blog for the sake of preserving it.  It was originally posted on June 30, 2008.
I’ve been playing around with the Tao Framework quite a bit, and I found that I had serious issues trying to figure out how to do “picking” or “selecting.” My goal was to create a 3D isometric tile map-editor (in short, a tilted-perspective map editor for a game).

There’s always the option of “drawing” under selection rendering mode — but this is ugly and complex, and didn’t work very good. The real trick that did the job, was to actually use GLUT (OpenGL’s Utility Library) to find a pixel and read the depth buffer (which had been set after the prior scene was rendered) and determine the spatial coordinates. In C++, this is an easy task, but in C#, this is a little more complex, since the Tao developers lurk in the dark at night and seldom come out into the light of day to write a tutorial or two. So, I’ve managed to figure out (after a month, tearing out of hair, and nearly a bit of seppuku tossed in the mix) how to properly do “Picking” in GL. I’m very pleased with the results, as should the Gods of GL (who roam the interweb and all GPUs).
//Variable Declaration
double Output_X, Output_Y, Output_Z;
double[] ModelviewMatrix = new double[16];
double[] ProjectionMatrix = new double[16];
int[] Viewport = new int[4];
float[] Pixels = new float[1];
//Used for some of the messy pointer work.
IntPtr PixelPtr = System.Runtime.InteropServices.Marshal.AllocHGlobal( sizeof( float ) );
//Grab Information about the Scene in OpenGL
Gl.glGetDoublev( Gl.GL_MODELVIEW_MATRIX, ModelviewMatrix );
Gl.glGetDoublev( Gl.GL_PROJECTION_MATRIX, ProjectionMatrix );
Gl.glGetIntegerv( Gl.GL_VIEWPORT, Viewport );
//Find the Depth and store it in a conventional manner with C# in mind
Gl.glReadPixels( e.X, Viewport[3] - e.Y, 1, 1, Gl.GL_DEPTH_COMPONENT, Gl.GL_FLOAT, PixelPtr );
System.Runtime.InteropServices.Marshal.Copy( PixelPtr, Pixels, 0, 1 );
System.Runtime.InteropServices.Marshal.FreeHGlobal( PixelPtr ); //yes, free the memory
//Finally grab the actual X, Y, and Z from all the data we have
Glu.gluUnProject( (double) e.X, (double) ( Viewport[3] - e.Y ), (double) Pixels[0], ModelviewMatrix, ProjectionMatrix, Viewport, out Output_X, out Output_Y, out Output_Z );

Let’s dissect this a little.

Each matrix in GL is a 4×4 matrix, so I initialize the storage for each matrix as a 16-length double array. The viewport is, quite simply, just four values, but we want to store them all in one contiguous array, so the Viewport is stored as a 4-length integer array. And finally, the Pixels array is necessary because of C# and its weirdness with typecasting. This code is all technically “safe” because of how I use the Marshal interop code (also note that I built a PixelPtr to associate Pixels with an unmanaged pointer).

The next three calls grab all of the data necessary: the modelview and projection matrices and the viewport settings. Pretty simple and straightforward.

Now, we need to figure out what the “depth” is at the current location. Notice that the code has literally been pulled out of a MouseMove event, so e.X and e.Y refer to the mouse’s X and Y coordinates. The depth is stored in PixelPtr. Note that to get the Y coordinate in OpenGL’s terms (origin at the bottom), I subtract the mouse Y from the viewport height (Viewport[3]).

Now, I use the Marshal to copy the data from the unmanaged PixelPtr back into the managed Pixels array. Very simple, but a pain in the butt to figure out on your own with the lack of documentation.

Finally, we make use of gluUnProject to un-project (normally we project the display) based on the data we have, and determine the location of the mouse in X, Y, and Z coordinates. This is why we have that Output_X, Output_Y, and Output_Z declaration at the top.

SharePoint Warmup Script

Disclaimer: This is an old post, which I am migrating from my old blog for the sake of preserving it.  It was originally posted on October 14, 2008.

A colleague of mine mentioned that after an iisreset/deployment to SharePoint, it’s pretty common that one needs to refresh/warm up all of the URLs that may be accessed. Since SharePoint does caching of pages and the like, we can secretly (like a ninja) ping our destination with VBScript/Windows Scripting. Sound complicated? Hardly. Even better, I’ll make it very simple for you — just copy and paste the following script (like a ninja), and then adjust it to your needs:

'
'SharePoint Warmup Script
'
'
'Create new object to ping URLs
Dim BaseURL
Dim NewURL

BaseURL = "http://my.sharePoint.installation/sites/mySiteCollection/"
NewURL = ""

Dim Ping
On Error Resume Next
Set Ping = CreateObject( "Microsoft.XMLHTTP" )

'If we don't have the capability to access the XML/AJAX object, break out
If Err.Number <> 0 Then wscript.Quit
On Error GoTo 0
'Start attempting to open the following URLs:

'Copy/paste the code below for several different sites/subsites/collections.
'
'Base Portal Dashboard
'
NewURL = BaseURL & "" 'Enter your new extension of the main URL
wscript.Echo "Pinging: " & NewURL
Ping.Open "GET", NewURL, False
Ping.Send
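
Rather than copy/pasting that block for every URL, you could also drive it from an array — same idea, just less repetition.  (The extensions below are made-up examples; substitute your own.)

```
'Ping a whole list of URL extensions in one loop
Dim Extensions, i
Extensions = Array( "", "SitePages/Home.aspx", "Lists/Announcements/AllItems.aspx" )

For i = 0 To UBound( Extensions )
    NewURL = BaseURL & Extensions(i)
    wscript.Echo "Pinging: " & NewURL
    Ping.Open "GET", NewURL, False
    Ping.Send
Next
```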

Monday, February 27, 2012

ClientAccessPolicy or Bust

I won't lie -- I'm no expert; heck, I'm not even a decent Web Service developer.  I'm a novice.  If you want an awesome user experience, I'll rock your world, but I digress...

I know WCF is supposed to be pretty easy and cool to work with, so I took it upon myself to write some of our latest "service" code for a demo.  The awesome thing about WCF is that it handles all of the underlying connections and the like for you -- all you have to worry about is the actual logic.  If you want to get a little more "dirty" you can configure which channels it runs over.

So, I wrote a handy little wrapper around our service that handles callbacks, initialization of the connection, and the like.  Whenever I called it, I kept getting a "Security error."  Of course, had I run through the entire Silverlight debug prompt, where it notified me that the service and application needed to run in the same web project, I might have figured it out sooner.

Instead, I decided to make it harder on myself (and actually easier).  I added a ClientAccessPolicy.xml to the WCF service project, similar to how one is necessary for our framework.  The WCF service and application no longer need to be hosted in the same web project -- the ClientAccessPolicy allows the Silverlight client to connect without any issues.
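For reference, a wide-open ClientAccessPolicy.xml (dropped at the root of the service's site) looks roughly like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow requests from any domain (convenient for development). -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <!-- Grant access to the entire service. -->
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

You can tighten the `domain uri` and `resource path` entries to specific values once you know where the client will be hosted.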

Now, mind you, this is a possible point of attack for a hacker if you're not careful, so it's probably best not to do it on a production machine -- but for local development work, it sure helps!