Developing on Staxmanade

Dynamically load embedded assemblies – because ILMerge appeared to be out.

At work, I started building a .net assembly that would probably find its way into a number of the server processes and applications around the shop. This particular assembly was going to end up containing quite a number of external open source references that I didn’t want to expose to the consumer of my library.

I set out to solve several simple requirements.

  1. Easy to use. Should be nothing more than adding a reference to the assembly (and use it).
  2. Consumer should not have to deal with the 5 open source libraries it was dependent on. Those are an implementation detail and it’s not necessary to expose those assemblies to the consumer, let alone have to manage the assembly files.

I originally got the idea from Dru Sellers’ post http://codebetter.com/blogs/dru.sellers/archive/2010/07/29/ilmerge-to-the-rescue.aspx

I gave ILMerge a try. As a post-build event on the project, I ran ILMerge and generated a single assembly, leveraging the internalize functionality of ILMerge so my assembly wouldn’t expose all of its open source dependencies through Visual Studio’s IntelliSense.

This almost gave me the output I wanted. Single assembly, compact, easy to use… Unfortunately, when I tried to use the assembly I started seeing .net serialization exceptions. Objects serialized from my ILMerged assembly could not be deserialized on the other end, because the receiving side expected the type from the original assembly, not the ILMerged one. (Maybe there’s a way to work around this, but I didn’t have time to figure that out – would love to hear any comments.)

So ILMerge appeared to be out, what next?

My coworker, Shawn, suggested I try storing the assemblies as resource files (embedded in my assembly). He uses the SmartAssembly product from Red Gate in his own projects, and mentioned that their product can merge all of your assemblies into a single executable – storing the assemblies in a .net resource file within your assembly/executable. This actually seemed easy to accomplish so I thought I’d give it a try.

How I did it.

Step 1: Add the required assemblies as resources to your project. I chose the Resources.resx route and added each assembly file to the Resources.resx. I like this because of how simple it is to get the items back out.

Step 2: We need to hook into the first point of execution. Normally that’s Main(…); in my case this was a library with a single static factory class, so in the static constructor of that factory I included the following lines of code.

static SomeFactory()
{
    var resourcedAssembliesHash = new Dictionary<string, byte[]>
    {
        { "log4net", Resources.log4net },
        { "Microsoft.Practices.ServiceLocation", Resources.Microsoft_Practices_ServiceLocation },
    };

    AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
    {
        // Get only the simple name from the fully qualified assembly name
        // (prob a better way to do this)
        // EX: "log4net, Version=??????, Culture=??????, PublicKeyToken=??????" - should return "log4net"
        var assemblyName = args.Name.Split(',').First();

        if (resourcedAssembliesHash.ContainsKey(assemblyName))
        {
            return Assembly.Load(resourcedAssembliesHash[assemblyName]);
        }

        return null;
    };
}


I’ll talk a little about each step above.



var resourcedAssembliesHash = new Dictionary<string, byte[]>
{
    { "log4net", Resources.log4net },
    { "Microsoft.Practices.ServiceLocation", Resources.Microsoft_Practices_ServiceLocation },
};


The first chunk is a static hash of (key = assembly name, value = byte array of the actual assembly). We will use this to load each assembly by name when the runtime requests it.



AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{...


Next we hook into the app domain’s AssemblyResolve event which allows us to customize (given a certain assembly name) where we load the assembly from. Think external web service, some crazy location on disk, database, or in this case a resource file within the executing assembly.



// Get only the simple name from the fully qualified assembly name
// EX: "log4net, Version=??????, Culture=??????, PublicKeyToken=??????" - should return "log4net"
var assemblyName = args.Name.Split(',').First();


Next we figure out the name of the assembly being requested. My original implementation used args.Name.Split(',').First() to get the simple name out of the full assembly name, but as I was writing up this blog post I thought there must be a better way to do this. So although I put in the effort to write this out, I haven’t verified that the possibly better way will work (so give it a try and let me know).
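For what it’s worth, here’s a sketch of that alternative. Note that AssemblyName.GetAssemblyName(…) actually expects a *file path*, so for a display name like the one in args.Name the AssemblyName constructor is probably the overload you’d want – I haven’t verified this against the resolver above:

```csharp
// Hypothetical alternative to the Split(',') approach: the AssemblyName
// constructor parses a full display name like
// "log4net, Version=..., Culture=..., PublicKeyToken=..."
// and exposes the simple name through its Name property.
var assemblyName = new System.Reflection.AssemblyName(args.Name).Name; // "log4net"
```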



if (resourcedAssembliesHash.ContainsKey(assemblyName))
{
    return Assembly.Load(resourcedAssembliesHash[assemblyName]);
}


Next we check whether the assembly name exists in the hash we declared initially, and if so we load it up…



    return null;
};


Otherwise, the assembly being requested is not one we know about, so we return null to let the framework figure it out the usual ways.



Step 3: Finally, I created a post build event that removes the resourced assemblies from the bin\[Debug|Release] folders. This allowed me to have a test project that depended only on the single assembly and verify that using it actually works (because it has to load its dependencies to work correctly, and they didn’t exist on disk).
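As a rough sketch, the post build event might look something like the following. The file names here are just the examples from this post (adjust to your actual dependency list), and $(TargetDir) is the standard Visual Studio build macro:

```
del "$(TargetDir)log4net.dll"
del "$(TargetDir)Microsoft.Practices.ServiceLocation.dll"
```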



Please consider.




  • You may not have fun if you package some of the same assemblies that your other projects may/will reference (especially if they are different versions).


  • Can’t say I have completely wrapped my head around the different problematic use cases this strategy could bring to life. (Use with care)

OData’s DataServiceQuery and removing the .Expand(“MagicStrings”) –Part II

In a previous post I elaborated on the problem of magic strings in OData service queries, and gave a quick (but lacking in depth) statically typed helper solution.

A commenter mynkow left a note stating that my solution would not work with nested objects. I initially replied asking if he could give an example (as I hadn’t run into that scenario yet being a noob to OData). He didn’t get back to me, but it wasn’t long before I ran into the problem he was talking about.

If we go back to LinqPad and look again at the Netflix OData api. Let’s say we want to pull down the People, their related TitlesDirected and the TitlesDirected ScreenFormats. (No real world scenario there – just made it up because they’re related properties). The OData query (with magic strings) would look like:

(from x in People.Expand("TitlesDirected/ScreenFormats")
select x).Take(5)

If you tried to take the above and translate it to my “no magic string” fix from the previous post you would get something like.

(from x in People.Expand(p => p.TitlesDirected /* Now what? dead end. /ScreenFormats*/ )
select x).Take(5)

Now that the problem in my solution was apparent, and using his example as a quick guide (it wasn’t quite what I was looking for, but it had the general theme), the solution became more than a few lines of code, and I wanted to wrap some tests around the whole thing just to verify it was all working correctly…

ODataMuscle was born:

http://github.com/Staxmanade/ODataMuscle

Sorry for the name. Just think of “Strong Typing” your OData queries and giving them a little Muscle. I threw this little project up on github since this blog is not the best place to version code and if anyone felt inclined to extend it they could easily fork it and do so.

I hacked the initial version together, and once a co-worker of mine was done with it I think he cleaned it up nicely.

This new version now supports expanding not only child properties, but grandchild properties and grandchild properties of collections. (That doesn’t seem to translate well…)

EX: our little Netflix example from above would now look like

(from x in People.Expand(p => p.TitlesDirected.Expand(p2 => p2.ScreenFormats))
select x).Take(5)

Which would translate into the following query

http://odata.netflix.com/catalog/People()?$top=5&$expand=TitlesDirected/ScreenFormats

Thanks to mynkow for the initial feedback and I hope this helps someone else…

OData’s DataServiceQuery and removing the .Expand(“MagicStrings”)

I was experimenting recently with the .Net implementation of OData and ran across one of my pet peeves. “Magic Strings”. Apparently, the .Net community’s definition of magic strings is close but seems slightly different from Wikipedia. Therefore the magic strings I’m talking about here are what you’ll find on such posts as “Functional .Net – Lose the Magic Strings.”

I don’t want to get into the magic string debate here; I just want to snapshot this little helper (for when I need to remember to write it again and don’t want to “figure it out”). This is also not intended to be a complete overview of OData, but I will provide some getting-started links and tips (if you haven’t touched it).

Enough background show me the code: (scroll to the bottom if you don’t care about the post)

Let’s pretend we want to request a “Title” from the NetFlix OData api.

You can do this by going to the web browser and typing the following URL

http://odata.netflix.com/catalog/Titles()?$top=1

Sweet. XML, yippie. Um, no thanks. Let’s try that again. Go download LinqPad (read up on using LinqPad for querying an OData store)

Once you’ve connected LinqPad to the NetFlix OData service (http://odata.netflix.com/catalog), we’re ready to play around. Our URL “query” above translates into a C# LINQ statement that looks like the below in LinqPad.

(from title in Titles
select title).Take(1).Dump()



The .Dump() is a LinqPad extension method that displays the object in the results window.




If you execute this in LinqPad you will see some data about the first Title from the Netflix OData service. In the results window, scroll all the way to the right. Notice all the properties that are supposed to be a Collection<T> but have no data? To retrieve these through OData you have to extend your LINQ query with the Expand(“{propertyName}”) method.



Let’s say we want to include AudioFormats collection when we ask for the first Title.



(from title in Titles.Expand("AudioFormats")
select title).Take(1).Dump()


Notice how we have to explicitly tell the OData service to include this property when we retrieve it from the service. Not only do we have to spell out the property name explicitly, it’s a magic string (gag, hack, baaa) nonetheless. If you click on “SQL” in the LinqPad result window, it will show the URL used for the OData query. Our URL shows the expanded property.



http://odata.netflix.com/catalog/Titles()?$top=1&$expand=AudioFormats




Now let’s pretend (just for the sake of pretending) that your front end application’s entire data access strategy was going to sit on top of OData. Not saying this is a good thing (or a bad thing). Just sayin…



If you have a fairly complex data model, and each screen in your application requests slightly different data in a slightly different way, in the end it all essentially comes down to a set of entities and their relationships. What would you do if you had to “.Expand” all those magic stringed property names? Now, I know we’re all great at search and replace (of the magic strings). However, every little step along the way where I can avoid a refactor that will break every other screen in the app – well, I think I just might take that.



Now, if you change your LinqPad query from a “C# Expression” to a “C# Program” and copy the helper class at the bottom of this post into the bottom of the LinqPad code window, you can write your linq statement as follows.



(from title in Titles.Expand(x=> x.AudioFormats)
select title).Take(1).Dump();


Notice the switch from magic strings to an intellisense helping, refactoring safe lambda? This trick is not new. You’ll see it in many .Net open source projects such as mocking frameworks, asp.net MVC projects etc…



Just wanted to write this little goodie down for the next time I need it. Hope this helps someone else as well.



public static class ServiceQueryExtension
{
    public static DataServiceQuery<T> Expand<T, TProperty>(
        this DataServiceQuery<T> entities,
        Expression<Func<T, TProperty>> propertyExpressions)
    {
        string propertyName = propertyExpressions.GetMemberName();
        return entities.Expand(propertyName);
    }

    public static string GetMemberName<T, TProperty>(this Expression<Func<T, TProperty>> propertyExpression)
    {
        return ((MemberExpression)propertyExpression.Body).Member.Name;
    }
}

C# 4.0 Optional Parameters – Exploration.

{… Removed big long story about how I ended up writing this post which provides no value to the blog…}

Summary of big long story to at least give a little context as to why (yet another post on optional parameters):

I threw an idea out to the Moq discussion group of how we could use the named/optional parameters in a future version of Moq. (you can read the thread here) In my original feature request I displayed my lack of concrete knowledge in the named/optional parameters support that is eventually coming with .net 4.0.

Once I learned that you could place default values on interfaces it left me with questions… So, what better way to figure them out? Go test it…

Disclaimer: (Shouldn’t every blog have some context enlightening disclaimer?)
I haven’t looked up best practices or lessons learned from people that have had this language feature (VB), so I’m just doing this as an experiment for myself. Hope some of my findings help the other C#’ers wanting to learn a little about the feature.

What are optional parameters?

Over at DimeCasts.Net, Derik Whittaker has a nice intro video: #153 – Exploring .Net 4 Features – Named and Optional Parameters

OR check out - http://tinyurl.com/yz3pc9o

 

Can an interface define a default value?

Yes!
image

 

Can I specify a default in the concrete implementation, if the interface has a default also?

Yes!

image

What happens when the concrete implementation has a different default value than the interface’s default?

If the interface has a default value specified, that is different from the concrete implementation, then it depends on what reference you’re using when executing the method.

image

In the case below we are executing the method directly off of the Foo instance and will therefore get the concrete implementation’s default value when executing.

(new Foo()).Bar() – would use the value of ‘1000’.

And in the case below we cast the Foo instance to an IFoo and it will then use the interfaces default value when executing.

((IFoo) new Foo()).Bar() – would use the value of ‘1’.

Below are some examples of the different use cases.

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void Should_get_the_concrete_class_default_value()
    {
        Foo f1 = new Foo();
        f1.Bar();
        f1.ParamValue.ShouldBeEqualTo(1000);
    }

    [TestMethod]
    public void Should_get_the_interface_default_value()
    {
        IFoo f = new Foo();
        f.Bar();
        f.ParamValue.ShouldBeEqualTo(1);
    }

    [TestMethod]
    public void Should_get_the_interface_default_value_because_of_explicit_cast()
    {
        Foo f = new Foo();
        ((IFoo)f).Bar();
        f.ParamValue.ShouldBeEqualTo(1);
    }

    [TestMethod]
    public void Should_get_the_concrete_class_default_value_because_of_explicit_cast()
    {
        IFoo f = new Foo();
        ((Foo)f).Bar();
        f.ParamValue.ShouldBeEqualTo(1000);
    }
}

interface IFoo
{
    int ParamValue { get; }

    void Bar(int paramValue = 1);
}

class Foo : IFoo
{
    public int ParamValue { get; private set; }

    public void Bar(int paramValue = 1000)
    {
        ParamValue = paramValue;
    }
}


 



The next experiment - Extract Interface.



Next I tried removing the IFoo interface that I’d created manually, because I wanted to exercise the “Extract Interface…” functionality, just to see how it dealt with these defaults.



Luckily, there were no surprises. The interface it created was exactly (less spacing) the same as I originally had.



Although it didn’t display the default constant value in the dialog during creation, there was a hint that the method signature had a default by placing [] around the int resulting in “Bar([int])”.



image




Side Tool Issue: Can’t say I like how it forced me to put the interface in a different file. I guess it’s enforcing “best practice” here, but I prefer to do this later in the dev cycle rather than immediately (kind of like how R# allows you to place it in the file next to the original class). #ToolGettingInWay





Optional Parameter Issue: One issue I see with this solution was the dirty/icky copy/paste feeling I got when extracting the interface – the default was copied from the class to the interface.




Possible solutions to the “dirty/icky copy/paste feeling” the extract interface gives.


(in no particular order of preference)




  • Place all defaults into a constant and reference the constant in both the interface and the concrete implementation(s).


  • Don’t place the defaults in the concrete implementation (only in the interface). Since you should probably not be depending on the concrete implementation to begin with, you wouldn’t need the default there (and wouldn’t even call it). This would also help in the case where there are multiple concrete implementations – having to sift through the code looking for all the defaults to update could be very error prone.



On the surface, named and optional parameters seem like a very simple feature of the C# language. But after delving into the feature a little, I can see there are many complicated scenarios you can get yourself caught up in.



As with anything…Use with care!

Go to Definition Tip with the C# ‘var’ keyword

This may be totally obvious to the masses out there, and it isn’t much of a tip, other than to say it works.

Did You Know?

F12 (Go To Definition) – works on the C# var keyword?

(That’s all there is to this post – the rest is just rambling)

I hit it by accident the other day (yes I know, F12 isn’t exactly in the usual path of accidental keystrokes – trust me, it was an accident). It brought Visual Studio to a screeching halt. That is, while VS was trying to load the object browser, satellites were linking up in outer space trying to get some message sent through the Pony Express about a tweetup with the Add Reference Dialog. (Point being – loading the object browser is REALLY SLOW.)

It dawned on me that the F12 (Go To Definition) keyboard shortcut works on the var keyword.

Usually I just use the tool tip window when I don’t have time to decipher why the variable’s naming isn’t clear. (good post on the subject)

image 

FYI: for those R# fans, who noticed after installing it, you lost the code metadata window in C# when F12ing it (Go To Definition). They’ve fixed it in the upcoming version http://www.jetbrains.net/jira/browse/RSRP-35547. So my satellites/pony express/tweetup/add ref dialog comment above won’t be an issue anymore. Yippee!

PowerShell – Compiling with csc.exe – more of a headache than it should have been. It is possible…

I was attempting to use PowerShell to compile a group of *.cs source files, needing the flexibility of programmatically swapping out dependent assembly references at compile time depending on certain build conditions… I don’t want to get too much into why I needed it – just know that it is doable (more painful than initially expected), but still possible.

First let’s get a csc command we want to compile.

Second, let me state that this was more of an exercise in wanting to learn PowerShell; there are probably other ways of accomplishing what I needed, it just seemed like a good time to start down the painful learning curve. Also note, I’m not a CSC compiler pro – I haven’t analyzed each of the “options” and whether it’s right/wrong/best practice – it just works… (thanks to Visual Studio & MSBuild for hiding how we should actually use the compiler)

Ok take a simple csc compile command – (In Visual Studio – File –> New Project -> ClassLibrary1 as a good starting point). Compile the project & check the build output window. You’ll get an output similar to the below.

C:\Windows\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /debug+ /debug:full /filealign:512 /optimize- /out:obj\Debug\PowershellCscCompileSample.dll /target:library Class1.cs Properties\AssemblyInfo.cs

Next figure how the heck to execute this in PowerShell.

& $csc $params --- NOPE
exec $csc $params – NOPE

I must have tried tens if not hundreds of methods to get the simple thing above to compile… needless to say I pinged a co-worker for some help. http://obsidience.blogspot.com/

His pointer – when trying to get a big string command to execute in PowerShell, do the following.

  1. Open up “Windows PowerShell ISE”  (on Windows 7)
  2. Paste the command from the command prompt window (with an “&” at the beginning)
  3. look for any coloration changes like…
     image
  4. Next, place the PowerShell escape character [`] in front of any character where the coloration changes (they’re very subtle, so look long and hard)
     image

We should now have a PowerShell string that compiles our project.

After I got that far, I cleaned up the compiler syntax for a little re-use. (You can download the project below to check it out.)

If you don’t want to see the entire csc compile in the project download above, below is the general usage…



################## Build Configuration ##################
$project_name = 'PowershellCscCompileSample'
$build_configuration = 'Debug'
#########################################################

$core_assemblies_path = 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5'
$framework_assemblies_path = 'C:\Windows\Microsoft.NET\Framework\v2.0.50727'

function global:Build-Csc-Command {
    param([array]$options, [array]$sourceFiles, [array]$references, [array]$resources)

    $csc = 'C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe'

    # can't say I'm doing delimiters correctly, but seems to work ???
    $delim = [string]""""

    $opts = $options

    if($references.Count -gt 0)
    {
        $opts += '/reference:' + $delim + [string]::Join($delim + ' /reference:' + $delim, $references) + $delim
    }

    if($resources.Count -gt 0)
    {
        $opts += '/resource:' + $delim + [string]::Join($delim + ' /resource:' + $delim, $resources) + $delim
    }

    if($sourceFiles.Count -gt 0)
    {
        $opts += [string]::Join(' ', $sourceFiles)
    }

    $cmd = $csc + " " + [string]::Join(" ", $opts)
    $cmd;
}

function global:Execute-Command-String {
    param([string]$cmd)

    # this drove me crazy... all I wanted to do was execute
    # something like this (excluding the [])
    #
    # [& $csc $opts] OR [& $cmd]
    #
    # however I couldn't figure out the correct powershell syntax...
    # But I was able to execute it if I wrote the string out to a
    # file and executed it from there... would be nice to not
    # have to do that.

    $tempFileGuid = ([System.Guid]::NewGuid())
    $scriptFile = ".\temp_build_csc_command-$tempFileGuid.ps1"
    Remove-If-Exist $scriptFile

    Write-Host ''
    Write-Host '*********** Executing Command ***********'
    Write-Host $cmd
    Write-Host '*****************************************'
    Write-Host ''
    Write-Host ''

    $cmd >> $scriptFile
    & $scriptFile
    Remove-If-Exist $scriptFile
}
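In case it helps anyone, Invoke-Expression might be a way to avoid the temp-file dance entirely – I haven’t verified it against this exact command string, so treat it as a sketch:

```powershell
# Hypothetical alternative: evaluate the command string in-process.
# The same escaping rules (backticks before commas, etc.) still apply,
# because the string is parsed as PowerShell script either way.
Invoke-Expression $cmd
```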

function global:Remove-If-Exist {
    param($file)
    if(Test-Path $file)
    {
        Remove-Item $file -Force -ErrorAction SilentlyContinue
    }
}

$resources = @(
#""
)

$references = @(
"$core_assemblies_path\System.Core.dll",
"$framework_assemblies_path\System.dll"
)

$sourceFiles = @(
#""
)

$sourceFiles += Get-ChildItem '.' -recurse `
    | where{$_.Extension -like "*.cs"} `
    | foreach {$_.FullName}

$debug = if($build_configuration.Equals("Release")){ '/debug-'} else{ "/debug+" }

$options = @(
'/noconfig',
'/nowarn:1701`,1702', # Note: the escape [`] character before the comma
'/nostdlib-',
'/errorreport:prompt',
'/warn:4',
$debug,
"/define:$build_configuration``;TRACE", # Note: the escape [`] character before the comma
'/optimize+',
"/out:$project_name\bin\$build_configuration\ClassLibrary.dll",
'/target:library'
)

$cmd = Build-Csc-Command -options $options -sourceFiles $sourceFiles -references $references -resources $resources

Execute-Command-String $cmd
 
  

How do I get UISpy.exe on my Windows 7 Developer Machine?

I know there’s a tool somewhere in some SDK called UISpy.exe. It looks like this.

image

You can read all about it here - http://msdn.microsoft.com/en-us/library/ms727247.aspx

But can you find it?

The docs in that link above state…

Note: UI Spy is installed with the Microsoft Windows SDK. It is located in the \bin folder of the SDK installation path (uispy.exe) or can be accessed from the Start menu (Start\All Programs\Microsoft Windows SDK\Tools\UISpy).

If I go take a look at (Start\All Programs\Microsoft Windows SDK\Tools\UISpy) as shown in the image below, UISpy is nowhere to be found.

image

In some forum post somewhere, I saw that I had to install the .Net Framework 3.0 SDK. Which is strange, since I have the 3.5 framework already installed and the tool isn’t there. Oh well – I found it (I thought), downloaded it, and tried to run the setup, where in Windows 7 I was prompted with the following dialog.

You must use "Turn Windows features on or off" in the Control Panel to install or configure Microsoft .NET Framework 3.0.

Ok, fine I’ll do that…

image

Searching the list of features to turn on and off, the only one that looks anything like what I’m looking for (outlined in red) has already been installed.

image

Off to do more research…

Then I run into this blog http://blogs.msdn.com/windowssdk/archive/2008/02/18/where-is-uispy-exe.aspx

Eventually I download the Windows SDK for Vista Update. Run the install wizard with all the defaults except this screen…

As the blog post states – make sure you take a pre-install snapshot of the following registry key

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs
Before Install:image 

image 

After Install:
image

I reverted the changes back to v6.0A in the registry above…

And TAH-DAH UISpy.exe has been found.

C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\UISpy.exe

Pain in the A$$… But it seems to work.

Comments

Jason.Jarrett
I never experienced a dialog. However, this post is rather old and things have changed since. I have not tried to run these steps since then. Good Luck
Anonymous
Thank you, however, when i done and open the UISpy, it pop up a dialog about stopping the tool. How can i handle it!
Anonymous
I have to agree with you, real pain in the a$$. Seriously, your post helped me a lot more specifically with the registry rollback.

Thanks

Don’t forget what’s happening behind the syntactic sugar. (C# events)

In a comment posted by Lashas to the i4o project over at Codeplex (comment here), he suggested a great performance improvement where instead of assigning an event’s handler directly to a handler method callback, we assign it to a field containing the handler. Initially I was skeptical of his change until I profiled it for myself. And after thinking about it for a second it made total sense.

The original implementation added the event handler to the property using the C# syntax as follows.

(item as INotifyPropertyChanged).PropertyChanged += IndexableCollection_PropertyChanged;

where “IndexableCollection_PropertyChanged” is a method defined as

private void IndexableCollection_PropertyChanged(object sender, PropertyChangedEventArgs e)
{…}

Lashas’s suggested performance improvement was to initialize a private field, assign it to the IndexableCollection_PropertyChanged and add/remove the property changed handler from the field instead of the example outlined above.

Field definition:

private PropertyChangedEventHandler _propertyChangedHandler;

Initialization (in the constructor of the class)

_propertyChangedHandler = IndexableCollection_PropertyChanged;

Usage then becomes

(item as INotifyPropertyChanged).PropertyChanged += _propertyChangedHandler;

Lashas reported an approximately 30-40% performance improvement when adding/removing items from the collection with the suggested change.

Why is that?

Well, if you think about it, what is this line?

(item as INotifyPropertyChanged).PropertyChanged += IndexableCollection_PropertyChanged;

but just a little Syntactic Sugar on top of actually doing…

(item as INotifyPropertyChanged).PropertyChanged += new PropertyChangedEventHandler (IndexableCollection_PropertyChanged);

and with the suggested change of assigning the handler to a field, we avoid all of the “new PropertyChangedEventHandler(…)” calls. That object creation can happen once in the constructor of the collection, and not on every add/remove of an item.

Just a good reminder that you need to constantly be aware of what is happening under the covers. The saying that most complexity can be solved by another layer of abstraction can cause issues when you don’t truly understand (or at least remember) what that layer is actually doing.

C# (checked) keyword – Learn something new every day

I’ve been programming against the .net framework since the late beta of version 1.0 came out. (I had a couple-year hiatus when I took a job doing some Linux & C++.) Suffice it to say, I’ve been using the language a long time. I still find it amazing that there are features in the language that have been around a long time, and I still manage to stumble across things I’ve never seen before.

Today’s newly learned feature came from a post by bogardj over at LostTechies, where he wrote up a little Expressions Cheat Sheet.

In his post, I noticed the “checked” keyword had the blue syntax highlighting (which usually means it’s a C# keyword).

image

So, to verify, I copied the word into my C# editor and what do you know… It is a C# keyword. This is probably something most, if not all, C# programmers out there already know and I, for some reason, have missed.

Here’s a blurb about it in the Stack Overflow “Hidden features of C#” thread.

And of course you can get the official details @ http://msdn.microsoft.com/en-us/library/74b4xzyw(VS.71).aspx

I decided to explore this a little. As an academic exercise I wanted to see what the difference would be between using the checked keyword with a cast operation and compare that to the System.Convert.ToByte operation.
image
Will generate the following error. System.OverflowException : Arithmetic operation resulted in an overflow.

image
While using the System.Convert class will generate a different error. System.OverflowException : Value was either too large or too small for an unsigned byte. 
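Since the code from the screenshots doesn’t survive here, a minimal sketch of the two conversions might look like this (the value 300 is an arbitrary choice that overflows a byte):

```csharp
int tooBig = 300; // larger than byte.MaxValue (255)

try { byte viaCast = checked((byte)tooBig); }
catch (OverflowException ex)
{
    // "Arithmetic operation resulted in an overflow."
    Console.WriteLine(ex.Message);
}

try { byte viaConvert = Convert.ToByte(tooBig); }
catch (OverflowException ex)
{
    // "Value was either too large or too small for an unsigned byte."
    Console.WriteLine(ex.Message);
}
```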

For a last experiment, I threw both of the conversion attempts in a loop and fired up the ANTS Profiler, which shows that using the built-in checked keyword with a cast operation is faster than using the BCL conversion.

image 

As to which one is better, I would probably have to dig deeper into other side effects that one may have over the other… Up front I don’t see any issue with the cast and checked keyword combo.

Comments

justinmchase
Good to know. Actually I did know about it but I think I had it backwards, I thought that it was checked by default and therefore the keyword was only useful if used in an explicitly unchecked block. Your tests here seem to disprove that, so thanks for the info.

(-2) for [Fluent] Specification Extensions

If you read my previous posts about [Fluent] Specification Extensions then you know that I'm still in an experimental phase of this idea.

  1. Fluent Specification Extensions
  2. (+1) for [Fluent] Specification Extensions

There are two more things I've found that are going against the specification extensions idea. The first one below is related to all specification extensions, not just the fluent flavored specification extensions.

-1. If you have a test that, for some reason, has an assertion in the middle of some code:
[Fact]
public void MockAsExistingInterfaceAfterObjectSucceedsIfNotNew()
{
    var mock = new Mock<IBag>();

    mock.As<IFoo>().SetupGet(x => x.Value).Returns(25);

    ((IFoo)mock.Object).Value.ShouldEqual(25);

    var fm = mock.As<IFoo>();

    fm.Setup(f => f.Execute());
}

It's a little difficult to tell where the assertion is, because you don't get the syntax-highlighting help from the static class name "Assert", as in:

    Assert.Equal(25, ((IFoo)mock.Object).Value);

However, I don't think the reason above is enough to stop using the pattern, for a couple of reasons:


  1. How often do you have assertions in the middle of your code?
  2. If you are doing your tests right, it doesn't take long to scan the above code for the ".Should..." extension method.
-1. Daniel from the Moq project tweeted Scott Bellware about his Spec Unit project's SpecificationExtensions.cs

@bellware shouldn't specunit ShouldNotEqual/etc. return the actual object rather than the expected? To chain other calls http://is.gd/iJuB

@bellware and ShouldBeNull/NotBeNull could also return the object: obj.ShouldNotBeNull().ShouldEqual(...), right?

with Scott's response being...

@kzu no because that would contribute to crowding many observations into a single method

To that I say: true, if you are doing pure Context/Specification-based unit testing. However, most of us aren't actually doing that (maybe we should?), so why not allow the test to say

someString
    .ShouldNotBeNull()
    .ShouldEqual(...)?

So for now... I'm going to continue to go with the Fluent Specification Extensions.

(+1) for [Fluent] Specification Extensions

If you read my previous post about Fluent Specification Extensions then you know that I'm still in an experimental phase of this idea.

I'd like to share one more positive I found by using the specification extensions in my testing framework. This benefit is there whether you use standard specification extension methods or the fluent flavor. The idea is very basic, but I didn't even realize its benefit until I ran into it directly.

And the great benefit is… (drum roll… errr… actually it's not mind blowing): by using extension methods to provide assertions on the objects we're testing, we've abstracted away the actual testing framework's assertion calls. (Told you it wasn't mind blowing, but read along for an example of how this abstraction can be good.)

Now I know that most of the time you won't ever change testing frameworks; however, I just ran into this when attempting to port Castle.DynamicProxy2 (DP2) to Silverlight. Its test project leveraged the NUnit testing framework, which hasn't officially been ported. You can find a quick port by Jeff Wilcox that runs in the Microsoft Silverlight Unit Testing Framework, but when I was porting DP2 that hadn't been done yet, and I didn't feel like porting NUnit at the time.

So, by providing this abstraction layer through the extension methods, you could easily swap which testing framework your project leverages for its assertions.

NOTE: the port from NUnit to the Microsoft framework wouldn't have been quite that easy, as [TestMethod] in the Microsoft framework is sealed, so I couldn't create a [Test] attribute inheriting from [TestMethod] to get the Silverlight testing framework to run the tests without changing the DynamicProxy test code… aside from that issue…

Let's take a concrete example of this abstraction benefit.

Notice how Assert.IsInstanceOfType() in NUnit and in Microsoft's testing framework take their parameters in reverse order from one another.

NUnit:

    Assert.IsInstanceOfType(Type expected, object actual);

Microsoft:

    Assert.IsInstanceOfType(object value, Type expectedType);

If you were switching from NUnit to the Microsoft framework or vice versa, a simple search-and-replace of [Test] with [TestMethod] would cover the majority of the port. However, Assert.IsInstanceOfType() would fail at compile time because of the parameter order (and who knows what else is different).

If you provide that layer of abstraction for the assertions, then switching between NUnit and the Microsoft framework (or vice versa) remains very simple, as you only have to make the framework-specific changes once.
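A sketch of what that seam might look like (the extension name and the plain-exception body are mine; in a real project the body would be a single framework-specific Assert call, which is the only line you'd touch when switching frameworks):

```csharp
using System;

public static class SpecificationExtensions
{
    // The one place that knows which test framework is in play:
    // NUnit flavor:  Assert.IsInstanceOfType(expectedType, actual);
    // MSTest flavor: Assert.IsInstanceOfType(actual, expectedType);
    public static object ShouldBeInstanceOf(this object actual, Type expectedType)
    {
        if (!expectedType.IsInstanceOfType(actual))
            throw new Exception("Expected an instance of " + expectedType);
        return actual;
    }
}

class Demo
{
    static void Main()
    {
        "hello".ShouldBeInstanceOf(typeof(string));
        Console.WriteLine("passed");
    }
}
```

Test code calls sut.ShouldBeInstanceOf(typeof(Foo)) and never needs to know which framework's Assert, or which parameter order, sits underneath.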

Fluent Specification Extensions

FYI: If you're familiar with extension methods and how to use them in testing scenarios… the interesting part of this post is at the bottom, starting at "Ok, on to the…"

C# extension methods give you some amazing power when it comes to extending the functionality of objects we don't own, and I've spotted a pattern on several blogs and in example unit-testing snippets, especially in the Context/Specification style testing space, that I find interesting.

The concept is to basically use the C# extension methods within a unit testing environment to give the system under test (SUT) more readability/understandability within the test code itself.

Here's an example of how you might normally write a unit test given the following SUT.

public class SystemUnderTest
{
    public SystemUnderTest() { SomeStringProperty = "Hello World!"; }
    public string SomeStringProperty { get; set; }
    public bool SomeBoolProperty { get; set; }
}

You might write unit tests that look like…

var sut = new SystemUnderTest();

Assert.IsTrue(sut.SomeBoolProperty);
Assert.AreEqual("Hello World!", sut.SomeStringProperty);

Now, the assertions above are small enough that it's pretty easy to tell what's going on; however, when you think about what you're looking at, it doesn't actually present the best readability.

Take the string's AreEqual assertion, for example… You first read "AreEqual", so now you have to allocate some space in your head to hold the data points that need to be evaluated all at once. (Maybe I'm getting lazy as I get old, but the less I have to think when reading tests, the more time I can spend understanding the domain being tested…)

Again, the example is over simplified, but I think you get the point.

What if you could make the test syntax read and flow in a very readable and understandable manner?

That's what the specification extensions give you. Given the two tests above and a couple of helper extension methods living in the testing library, I could write something like:

var sut = new SystemUnderTest();
sut.SomeBoolProperty.ShouldBeTrue();
sut.SomeStringProperty.ShouldEqual("Hello World!");

It may just be me, but that just feels better and is more understandable, and the great thing is I didn't have to touch my domain objects to support this style of test…

Another great benefit is you don't have to type "Assert.Xxxx(yyy)" every time you want to create an assertion. You can just type sut.SomeThing. {and here you get help from IntelliSense}, giving you some great context-based assertion options.

I googled for a library that had a pre-built set of extension assertions and ended up finding the http://code.google.com/p/specunit-net/source/browse/ by Scott Bellware. If you dig into the source of the project you can find a helper class called SpecificationExtensions.cs which basically gives you all the "Should..{your assertion here}" extension methods.

Ok, on to the real point (sorry it's taken so long).

After downloading and playing with the extension specifications from Spec Unit, I thought what if we made that more fluent?

So I gave it a quick spike and instead of writing some tests that look like...

sut.SomeStringProperty.ShouldNotBeNull();
sut.SomeStringProperty.ShouldBeOfType(typeof(string));
sut.SomeStringProperty.ShouldEqual("Hello World!");

You could have less wordy code and still retain all the meaning and readability with a set of fluent specification extensions.

sut.SomeStringProperty
    .ShouldNotBeNull()
    .ShouldBeOfType(typeof(string))
    .ShouldEqual("Hello World!");

I haven't figured out what sorts of bad things this style of assertion could bring... but we'll experiment for a while...
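For the curious, the extensions behind that chain are tiny — each one asserts and then returns the value so the next call can hang off it. (This sketch throws plain exceptions to stay self-contained; the Spec Unit versions delegate to a test framework and return void.)

```csharp
using System;
using System.Collections.Generic;

public static class FluentSpecificationExtensions
{
    public static T ShouldNotBeNull<T>(this T actual) where T : class
    {
        if (actual == null)
            throw new Exception("Expected a non-null value.");
        return actual; // returning the value is what enables chaining
    }

    public static T ShouldBeOfType<T>(this T actual, Type expectedType)
    {
        if (!expectedType.IsInstanceOfType(actual))
            throw new Exception("Expected an instance of " + expectedType);
        return actual;
    }

    public static T ShouldEqual<T>(this T actual, T expected)
    {
        if (!EqualityComparer<T>.Default.Equals(actual, expected))
            throw new Exception("Expected <" + expected + "> but was <" + actual + ">");
        return actual;
    }
}

class Demo
{
    static void Main()
    {
        "Hello World!"
            .ShouldNotBeNull()
            .ShouldBeOfType(typeof(string))
            .ShouldEqual("Hello World!");
        Console.WriteLine("all specifications passed");
    }
}
```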

Here's an example console app with the extensions included.

DISCLAIMER: I haven't tested all the extensions so if you notice any issues please feel free to let me know...

Comments

Jazz
I just added this helpful extension method to my code. Thought of sharing it.

public static IEnumerable<T> ShouldContain<T>(this IEnumerable<T> collection, Func<T, bool> expectedCriteria)
{
    collection.Any(expectedCriteria).ShouldBeTrue();
    return collection;
}
tims
You might find this useful:

http://code.google.com/p/shouldit/

ShouldIt is an open source library of fluent specification extensions that can be used with any unit testing framework.
DB
Whats up Stax! Man, you are always deep into stuff I can't understand!

WCF Service Proxy inside Silverlight with a generic type

We've implemented a Silverlight 2 business application communicating through WCF, and I just had to blog about something I found possible in .NET in general…

On the server side we have a very simple generic object used to communicate validation issues back to our Silverlight client when a web service method is called. Here's the basic interface.

public interface IValidatedResult<T>
{
  T Result { get; set; }
  List<string> ValidationIssues { get; set; }
}

Now say you had a method that exposed this generic result object through your web service…

public ValidatedResult<long> StringLength(string param1)
{
  return new ValidatedResult<long>(param1.Length);
}

Now if you go to the Silverlight project and tell Visual Studio to generate a proxy for you (against the service you just created), it will give you a proxy with an object that is not generic. You end up with some autogenerated code that looks more like…

public partial class ValidatedResultOflong : object, System.ComponentModel.INotifyPropertyChanged
{

  private long ResultField;

  private System.Collections.ObjectModel.ObservableCollection<string> ValidationIssuesField;

  [System.Runtime.Serialization.DataMemberAttribute()]
  public long Result
  {
    get
    {
      return this.ResultField;
    }
    set
    {
      if ((this.ResultField.Equals(value) != true))
      {
        this.ResultField = value;
        this.RaisePropertyChanged("Result");
      }
    }
  }

  [System.Runtime.Serialization.DataMemberAttribute()]
  public System.Collections.ObjectModel.ObservableCollection<string> ValidationIssues
  {
    get
    {
      return this.ValidationIssuesField;
    }
    set
    {
      if ((object.ReferenceEquals(this.ValidationIssuesField, value) != true))
      {
        this.ValidationIssuesField = value;
        this.RaisePropertyChanged("ValidationIssues");
      }
    }
  }

  // stripped out the INotifyPropertyChanged goo
}

Notice the non-generic type ValidatedResultOflong that was generated? This non-generic object is fine, except when you want to do some generic processing on these objects, for things like error handling or validation handling. If we had to create different handling methods for each of these generated types, that could prove laborious…

Say I wanted to write an extension method to do some generic processing on all objects that are a ValidatedResult of T… Unfortunately, with the proxy code generated by Visual Studio, there is no common signature we can key off of to write this method.

Then I thought I would try something… Can you have one partial class that contains a common property (in this case a "Result" and a "ValidationIssues" property), and another partial class in a different file that declares the type implements an interface defining those same properties… and would that compile?

So I wrote my first test...

Here is our auto generated partial class simulated...

public partial class Foo
{
  public bool Result { get; set; }
}

I then wrote a generic IResult of T interface to declare that the object has a Result property.

public interface IResult<T>
{
  T Result { get; set; }
}

And now the specific implementation with a long Result type.

public partial class Foo : IResult<long> {}

After putting those three pieces together, I hit Build in VS and, to my surprise (at first — now it makes total sense), it compiled… This was great news. It meant I could create a generic processor for my WCF objects in Silverlight… I'll show how on the Silverlight side below…

I defined the validated result contract as follows...

public interface IResultProperty<T>
{
  T Result { get; }
}

public interface IValidatedResult<T> : IResultProperty<T>
{
  List<string> ValidationIssues { get; set; }
}

This meant I could quickly create partial-class stubs for each of the WCF-generated objects (ValidatedResultOf{object}), declaring to the compiler that all of these objects truly implement the ValidationIssues and Result properties.

Here's an example of the partial class for the ValidatedResultOflong

public partial class ValidatedResultOflong : IValidatedResult<long> { }

With that in place, this meant I could create some generic handling methods for all of my objects that now implement IValidatedResult<T>...

public static bool HasValidationIssues<T>(this IValidatedResult<T> validatedResult)
{
    return validatedResult != null
        && validatedResult.ValidationIssues != null
        && validatedResult.ValidationIssues.Count > 0;
}
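Putting the pieces from this post together in one self-contained sketch (the ValidatedResultOflong stub here stands in for the Visual Studio-generated half of the partial class):

```csharp
using System;
using System.Collections.Generic;

public interface IResultProperty<T>
{
    T Result { get; }
}

public interface IValidatedResult<T> : IResultProperty<T>
{
    List<string> ValidationIssues { get; set; }
}

// Stands in for the generated proxy half of the partial class.
public partial class ValidatedResultOflong
{
    public long Result { get; set; }
    public List<string> ValidationIssues { get; set; }
}

// The hand-written half that pins the generic contract onto the generated type.
public partial class ValidatedResultOflong : IValidatedResult<long> { }

public static class ValidatedResultExtensions
{
    public static bool HasValidationIssues<T>(this IValidatedResult<T> validatedResult)
    {
        return validatedResult != null
            && validatedResult.ValidationIssues != null
            && validatedResult.ValidationIssues.Count > 0;
    }
}

class Demo
{
    static void Main()
    {
        var r = new ValidatedResultOflong
        {
            Result = 5,
            ValidationIssues = new List<string> { "param1 is required" }
        };
        Console.WriteLine(r.HasValidationIssues()); // True
    }
}
```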

I don't know that I've ever heard anyone talk about one partial class containing some property or common method, and being able to create another partial class that declares the interface contract for it… Pretty cool…

Comments

Jason.Jarrett
Interesting observation. Thanks for sharing.

I haven't pushed it very far (bool, string, MyCustomObject, etc)

Boris Modylevsky
Thanks for the post. It's really exciting to receive a proxy of Generic class. It really works smoothly with "simple" classes. But more complex ones result with random names in Silverlight proxy. For example, my `_SimpleTree_` became _SimpleTreeNodeOfItemErNMAaNV_
Jason.Jarrett
Ok, I quickly threw an example together. I didn't comment much, but the code's there and "works on my box..."
Let me know if you have any troubles either understanding it, or getting it working.

I've never used this file-hosting service, but giving it a try… I've placed the project here: http://www.filesavr.com/validatedresultsample

Also, the only concepts to really look at here are the `ValidatedResult<T>` and the notion of the partial classes used alongside the VS service-reference code to get the extension methods to work… This by no means follows best practices with some of this stuff.

Good luck!
greg
... very much appreciated!!
Jason.Jarrett
Greg,
I will try to put together an example and post it here soon.
greg
... too many snippets for me to get the big picture... It looks promising, though. Do you have a simple working example you could post? I am interested in the generic proxy concept whether used within Silverlight or otherwise. Thanks in advance.

BinarySearchOrDefault()

When you use BinarySearch on a List<T>, you have to declare an int to hold the return index of the search. When the item is not found, the returned value is negative and, depending on its value, can mean different things. However, most of the time I use this search method I don't care about those other options: either give me the value I'm searching for or give me the default value of T.

I wrote a little helper extension that returns the found object, or default(T) if the item could not be found.


public static T BinarySearchOrDefault<T>(this List<T> list, T item, IComparer<T> comparer)
{
    int returnIndex = list.BinarySearch(item, comparer);

    if (returnIndex >= 0)
        return list[returnIndex];
    else
        return default(T);
}
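Usage looks like this — keep in mind BinarySearch assumes the list is already sorted with the same comparer (the list contents here are just for illustration):

```csharp
using System;
using System.Collections.Generic;

public static class ListExtensions
{
    // Returns the matching element, or default(T) when BinarySearch reports "not found".
    public static T BinarySearchOrDefault<T>(this List<T> list, T item, IComparer<T> comparer)
    {
        int returnIndex = list.BinarySearch(item, comparer);
        return returnIndex >= 0 ? list[returnIndex] : default(T);
    }
}

class Demo
{
    static void Main()
    {
        var names = new List<string> { "alice", "bob", "carol" };
        names.Sort(StringComparer.Ordinal); // must be sorted before BinarySearch

        string found = names.BinarySearchOrDefault("bob", StringComparer.Ordinal);
        string missing = names.BinarySearchOrDefault("dave", StringComparer.Ordinal);

        Console.WriteLine(found);              // bob
        Console.WriteLine(missing ?? "(null)"); // (null)
    }
}
```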