There are times when ReSharper gets confused. It might be unable to locate references if you've been re-jigging your packages (oo-er), or it might be unable to perform a refactoring on a file because it's in read-only mode for no good reason.
There's always the option of clearing the ReSharper caches through the Visual Studio settings...
I've been using the command line more recently, though, so I decided to make a script to do this from the root directory of the repo. It depends on Git being available on the command line, expects ReSharper version 8.2, and assumes you haven't tweaked where ReSharper keeps its caches.
It's a PowerShell script, so add the contents to your PowerShell profile and set an alias if you like.
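The script itself isn't reproduced here, but a minimal sketch of the idea looks something like the following. The cache location under %LOCALAPPDATA% and the matching of cache folders by repo name are my assumptions rather than anything guaranteed by the post, so adjust both for your own setup.

# Sketch only: clears ReSharper 8.2 solution caches for the current repo.
# Assumes the default cache location under %LOCALAPPDATA% and that the cache
# folder name contains the repo folder name - adjust both for your setup.
function Clear-ReSharperCache {
    $repoRoot = git rev-parse --show-toplevel 2>$null
    if (-not $repoRoot) {
        Write-Error "Not inside a git repository."
        return
    }
    $repoName = Split-Path $repoRoot -Leaf

    $cacheRoot = Join-Path $env:LOCALAPPDATA "JetBrains\ReSharper\v8.2\SolutionCaches"
    Get-ChildItem $cacheRoot -Directory |
        Where-Object { $_.Name -like "*$repoName*" } |
        ForEach-Object {
            Write-Host "Removing $($_.FullName)"
            Remove-Item $_.FullName -Recurse -Force
        }
}

Set-Alias rscc Clear-ReSharperCache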
Monday 3 August 2015
Tuesday 8 April 2014
ServiceStack.Redis and StructureMap
I've tripped up on this a couple of times, so I've decided to lay out the right way to make use of the ServiceStack.Redis client library when using the StructureMap DI container. The mistake I used to make was to have my controller or service depend upon an IRedisClient. StructureMap would be configured to provide an instance of an IRedisClient by invoking GetClient() on whatever singleton instance of IRedisClientsManager I was using, usually the PooledRedisClientManager. Much was learned from this SO answer.
This was a problem because it limited the re-usability of the consuming controller/service. Such a service might provide a method that used the injected IRedisClient to interact with the Redis instance. IRedisClient implements IDisposable and should be wrapped in a using statement in order to release the resource and close the channel when it has completed the task. In doing so, you make the method on the service non-idempotent: it has the side-effect of changing the state of the service by disposing of its injected IRedisClient instance. You can only successfully call the service method once; calling it a second time throws an exception.
The solution is fairly simple. Instead of depending directly on the IRedisClient, the service needs to depend on the IRedisClientsManager. The service can then call GetClient() on the manager itself and create a fresh instance of IRedisClient every time the method is called. Idempotency is achieved because a client of the service can call the method any number of times without causing any unintended side-effects. The state of the service is no longer affected by disposing of its only IRedisClient instance; the IRedisClient instances that are created are still disposed of, but the service is able to create a new one for itself each time.
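A minimal sketch of the shape this takes (the TrackingService and its key are made up for illustration, and the registration assumes StructureMap's For/Singleton/Use syntax):

using ServiceStack.Redis;
using StructureMap;

// The service depends on the manager, never on a client instance.
public class TrackingService
{
    private readonly IRedisClientsManager _redisManager;

    public TrackingService(IRedisClientsManager redisManager)
    {
        _redisManager = redisManager;
    }

    public void RecordVisit(string pageKey)
    {
        // A fresh client per call: disposing it here no longer leaves the
        // service in a broken state, so the method can be called repeatedly.
        using (var redis = _redisManager.GetClient())
        {
            redis.IncrementValue(pageKey);
        }
    }
}

public static class IoC
{
    public static IContainer Build()
    {
        // Only the manager is a singleton; it hands out short-lived clients.
        return new Container(x =>
            x.For<IRedisClientsManager>()
             .Singleton()
             .Use(new PooledRedisClientManager("localhost:6379")));
    }
}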
The pattern is very similar to the RavenDB IDocumentStore and IDocumentSession that I am familiar with, so I'm surprised it took me so long to figure out!
Saturday 29 March 2014
F# Type Providers
Solving F# problems at a Skills Matter session earlier this week allowed me to fully appreciate the power of the Type Provider. The task can be found here for those interested: https://github.com/tpetricek/Dojo-Type-Provider-Treasure-Hunt
The aim of this post is not to provide the solution to the problem but to underline the usefulness of the type provider - which was the ultimate goal behind the exercise. Type Providers succeed at bridging the gap between code and data. They make data easy to navigate by extracting the schema and publishing it through intellisense.
This is a gift for developers who are used to working with static types. Type Providers have been written for all the most popular data sources, so you can perform tasks against almost anything as though it had already been translated into domain objects. This is often the goal of an ORM: to abstract away the plumbing of persistence interaction. Type Providers, however, are data-driven rather than being a 'code first' affair. You specify the source of the data and the Type Provider does the heavy lifting of examining the schema, taking the naming conventions and sample data and presenting the developer with a suitable API to explore.
If you were to try to do this with an ORM you would need to create the DTO classes yourself and plug them into the Data Context (or equivalent). Type Providers are productivity boosting in this sense as they require very little bootstrapping.
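As a tiny illustration, here is FSharp.Data's CSV provider at work - the file name and column names are invented for the example:

open FSharp.Data

// The provider reads the sample file at compile time and generates a type
// whose rows expose each column as a property - no hand-written DTOs required.
type Treasure = CsvProvider<"treasure-hunt.csv">

let clues = Treasure.Load("treasure-hunt.csv")

for row in clues.Rows do
    // Column names such as Location and Clue come straight from the CSV header.
    printfn "%s: %s" row.Location row.Clue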
The power of intellisense:
Type Providers have done what WCF Web Services did - making strongly typed data access possible over a network - but have liberated the publisher of that data to do so in whatever format they choose. Previously, if you wanted your data to be discoverable to the developer in the IDE, you would have needed to create a Web Service wrapper around your data and model all the types yourself. Now you can just provide direct access to the data source and let Type Providers do the rest!
I'm yet to make use of FunScript, but it looks like a great implementation of a Type Provider - applying the same principles to the HTML DOM API, meaning you can create strongly typed web pages! It even includes common JavaScript libraries, so you can create rich SPAs and interact with your domain with the same workflow and familiarity across both contexts.
Monday 27 January 2014
Decoupling your Web Deploy Build Script from Team City with PowerShell
Continuing my (inadvertent) series on streamlining your deployment process, this next instalment is about bundling your deployment script in with your deployment package and breaking dependencies between Build and Deployment Configurations.
Previous Topics:
1 - App Settings are for Variables that Change with the Environment
2 - Separate Build and Deploy Configs with Team City and MS Web Deploy
3 - Specifying Environment Variables at Deploy Time not at Build Time
When I last left this topic, environment settings were captured in Team City's Build Parameters. They were applied at deploy time using a deployment script that called the web deploy command line executable. This helps keep sensitive settings out of source control. The bad news is that this deploy script was also maintained in Team City.
The reason this is a problem becomes clear when we need to add or remove a parameter from the configuration. A new parameter name and value must obviously be added to the configuration before it can be used in the deployment. However, we are also required to update the deploy script to apply this parameter during the deployment. What we have is a dependency between the deploy script and the build package. The deploy script cannot be updated long in advance of a new parameter being added, because any deployment made in the meantime would try to apply the new parameter where it is not yet needed and would fail. We need to move away from a shared deploy script and towards one that is specific to the package it is going to deploy.
The answer here is to bring the deploy script into source control. That way, we can manage the script like any other code artifact with version history and also, crucially, configure the script such that it will apply all the build parameters to the deployment package at the time it should be deployed. We can then add the actual parameter name and value to Team City far in advance of the actual deployment time, as we know our build script will only try to make use of the new parameter when it is executed, whilst parallel deployments can continue unaffected.
A deployment script might look something like this:
MyWebApplication.deploy.cmd /y /M:%TargetDeployUrl% /u:%Username% /p:%Password% /A:Basic -allowUntrusted "-setParam:name='LoggingEmailAddress',value='%LoggingEmailAddress%'" "-setParam:name='ServiceEndPoint',value='%ServiceEndPoint%'"
This contains only two build parameters (environment variables), but you can see that it's fiddly enough to warrant version history. All we are doing is calling a command line program, so the easiest thing to do would be to create another .cmd in the same directory. I'm going to use a PowerShell script instead, because it will make it easier to work with a collection of target deploy URLs, as we shall see later on.
The plan is to capture this call in a PowerShell script which is then called by our deployment build configuration in Team City. Team City will need to pass the build parameters to the PowerShell deploy script so that the actual values can be substituted upon execution. The simplest way to do this is with Environment Variables. Environment Variables are read from PowerShell using the $env: prefix. You'll see these in action in the script below. Warning: do not use periods or dashes in your environment variable names, as PowerShell has trouble escaping these characters - use underscores if necessary. Don't worry about the 'env.' that Team City prepends to your variable; this does not affect your deploy script.
The PowerShell script should be added to the root of the solution and committed to source control. When the Build Configuration is run (that's build, not deploy - see my previous article on separating build and deploy steps), we need to make sure that this script is 'bundled' in with the deployment package. The deployment package contains the web deploy .cmd program that we need to call, so we should put our script in the same directory. To do this, configure an additional Artifact Path in the General Settings of your build configuration in Team City. We'll call our deploy script Deployment.ps1.
The deploy build configuration now has access to this script. The single build step of our deploy configuration just has to invoke Deployment.ps1. The settings shown in the image below were all I needed to enable this. The environment variables do not have to be referenced directly - this is a good thing. They are all accessible by the deploy script throughout the deployment, permitting the script to make use of whatever environment variables it needs.
Lastly, the contents of Deployment.ps1 are shown below. The main reason for using a PowerShell script is now clear: the ability to deploy to any number of targets. Running through the script, we see that it first defines a function to get the working directory. This is needed in order to reference the web deploy .cmd by its absolute file path. Then we read in the TargetDeployUrls environment variable - as defined in our deploy configuration in Team City - and split it into an array on the comma character. We then loop over the array, calling the web deploy .cmd once per target. You can see that the long command is split into several lines for readability using a here-string. The line breaks are then removed so a single-line operation can be performed. Note where the $env:LoggingEmailAddress and $env:ServiceEndPoint parameters are referenced. The command is executed using the Invoke-Expression alias iex.
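The original script was embedded as a gist; a rough reconstruction from the description above - with the application name and the two setParam values standing in as placeholders - looks like this:

function Get-ScriptDirectory {
    # The web deploy .cmd sits next to this script, so resolve our own folder
    # in order to reference the .cmd by its absolute path.
    $invocation = (Get-Variable MyInvocation -Scope 1).Value
    Split-Path $invocation.MyCommand.Path
}

$workingDir = Get-ScriptDirectory

# Team City supplies a comma separated list of targets via an environment variable.
$targets = $env:TargetDeployUrls -split ","

foreach ($target in $targets) {
    # Build the command as a here-string for readability...
    $command = @"
$workingDir\MyWebApplication.deploy.cmd /y /M:$target
/u:$($env:Username) /p:$($env:Password) /A:Basic -allowUntrusted
"-setParam:name='LoggingEmailAddress',value='$($env:LoggingEmailAddress)'"
"-setParam:name='ServiceEndPoint',value='$($env:ServiceEndPoint)'"
"@

    # ...then collapse it back onto a single line and execute it.
    $command = $command -replace "`r`n", " " -replace "`n", " "
    iex $command
}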
The array loop has the advantage of allowing us to deploy to environments with any number of host machines. For example, in our staging environment we have only one server that needs to be deployed to, whilst our production environment has two. This deployment script caters for both scenarios - all you need to do when deploying to production is provide a comma-separated list of deploy target URLs. However, in its current form the script applies the same username and password to all servers. This might not be the case in every environment, but that improvement will have to wait for another day!
Friday 10 January 2014
Why Functional Programming?
I was in attendance at FPDays 2013 to hear Bodil Stokke's talk, Programming, Only Better. I revisited the talk recently in order to put the case for functional programming and F# to my colleagues. The first part of my presentation is largely taken from Bodil's talk, so this is basically a transcript with a couple of points from other sources, primarily Scott Wlaschin's F# for Fun and Profit.
Programming Techniques
We strive to make it simple to create programs. We use technology, tools and techniques to make programs better and to make better quality programs. The most basic tool we have is our own reasoning; other techniques include testing, pair programming and code reviews. We'll explore two of these.

Reasoning - we reason about code when we are trying to infer what it will do. We reason to better understand what is going on. Debugging is the most tangible form of reasoning: typically we are following the code path and understanding the flow of data. In every program we write, we have to reason about what the code is doing. In its most basic form, we are walking through the program in our minds.
Testing - while we can be sure that programs are always reasoned about, they are certainly not always tested. That is because testing is useful, but limited. Testing can only show you the presence of bugs, never their absence. TDD is built around this fact: the Red, Green, Refactor cycle highlights the presence of a bug because the code to perform the task is not yet written, so the test fails. Writing the code to make the test pass is the process of removing a bug. But just because a system has 100% code coverage, it cannot be said to be 'bug free'.
Programming Difficulties
Programming is difficult largely because programs grow to be complex. Code volume also makes programs difficult to maintain, but volume and complexity are not necessarily linked. It's very easy to create an overly complex program without applying thought to its design; simplicity is much harder to achieve. We'll look at one source of complexity.

State - object oriented languages idealise encapsulated state. OO languages are useful precisely because we use code to manipulate an object's state. So state changes with time - known as entropy in the mathematical world. Most of the time we are intentionally changing state in a useful way, but state can also be changed by something outside our control (unless we are very careful). The code example below highlights what it means to have an object enter a 'bad state' without that ever being communicated back to the calling code.
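The original example isn't preserved, so here is a stand-in in the same spirit - a made-up Account class that swallows a problem by mutating its own state:

public class Account
{
    public decimal Balance { get; private set; }
    public bool IsFrozen { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    public void Withdraw(decimal amount)
    {
        if (IsFrozen)
        {
            return; // silently ignored - the caller is never told
        }

        if (amount > Balance)
        {
            IsFrozen = true; // a state change as a side effect, never communicated
            return;
        }

        Balance -= amount;
    }
}

// var account = new Account(100m);
// account.Withdraw(150m); // no return value, no exception...
// account.Withdraw(50m);  // ...yet this call now does nothing and Balance is still 100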
State Spoils Testing - how can you test an object in all its possible states? You can only test for the states you think the object might enter. The number of possible states grows exponentially, so the more potential state you create, the less your tests actually tell you!
State Spoils Reasoning - state allows for unpredictability. You can never be sure that you will get the same output for the same input. An object's state can change, so how can you be sure of it at any given time? This becomes even worse when you consider concurrency: it is much harder to describe the control flow of a concurrent, stateful program. In OO you have to use locks to protect against unwanted state change, which is cumbersome, inefficient and still imperfect.
Functional Languages
What can they do to help? They idealise Referential Transparency. Referential transparency means dealing purely with inputs and outputs: a function just maps an input to an output.

Predictability - is guaranteed in a functional language because you will always get the same output for the same input. A pure function can be thought of as nothing more than a glorified switch statement!
No Side Effects - because there is no state! There are no variables in pure functional languages, only values. Once a value is created, it cannot be changed. Values can be created local to a function, but they are just as immutable and cannot be reassigned. State is instead managed by passing values around as inputs and outputs. When you think about it, it doesn't make sense to want to change a value you have set; what makes sense is to simply produce a new value, and that is essentially what functions do.
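For example, a trivial F# sketch:

// A pure function: the same inputs always give the same output,
// and nothing is mutated - a new value is produced instead.
let addDays days (date: System.DateTime) = date.AddDays(float days)

let release = System.DateTime(2014, 1, 10)
let review = addDays 30 release   // 'release' is untouched; 'review' is a new value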
State
What happens when we have no state? Reasoning becomes much simpler! Predictability makes programs easier to understand. Tests also become more meaningful for the same reason - although they are still unable to tell you that bugs do not exist, and they are only meaningful for the inputs you are testing with.

Optimizing - once you realise that output will never change for a given input, you understand that it no longer matters what order functions are called in. This allows you to perform functions in parallel, as in map reduce. Lazy evaluation also becomes possible, because it doesn't matter when you call a function - the result will always be the same for the given input. Caching is more widely permitted for the same reason. And because there are no side effects, you can be sure nothing is going to be changed from underneath you! This makes concurrent programs easier to manage, as you don't need to worry about race conditions and lock objects.
Conclusion
The techniques we use to help deliver quality software are undermined by state. Functional languages remove state from the programming paradigm and in doing so prevent an entire class of bugs from ever entering our code. Functional programming makes our code easier to understand because of its predictability, and it has greater potential for optimisation.
Wednesday 27 November 2013
Hotfix KB2842230 for WinRM on Windows 2012 (64-bit)
This post is just to document the location of a hotfix file that Microsoft Support fails to link to. They try to provide access from here: http://support.microsoft.com/kb/2842230. But the 'Hotfix Available' link doesn't provide access to the 64-bit version needed for Windows Server 2012.
http://hotfixv4.microsoft.com/Windows%208%20RTM/nosp/Fix452763/9200/free/463941_intl_x64_zip.exe
I found the link eventually in a GitHub repository: Packer-Windows. So thanks for that! The hotfix allows WinRM to respect the configured value for "MaxMemoryPerShellMB" instead of always using the default of 150 MB.
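For reference, the limit itself can be raised from an elevated PowerShell prompt - 1024 here is just an example figure:

# Raise the per-shell memory limit that the hotfix makes WinRM honour.
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024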
This means I can get on with provisioning MVC4 and IIS etc with Chef. Hopefully more on that to come.
Tuesday 26 November 2013
Quick How To: Entity Framework Auto Migrations
The requirement in the project is to store some tracking information in a SQL database. I have created a POCO out of basic value types to house the data, and I want each instance of this object to be persisted as a row in a table: enter EF 6.0. It's as simple as ever to bring in the EF NuGet package at the Package Manager console:
Install-Package EntityFramework -ProjectName <ProjectName>
Now we have the tools to set up a database context for the POCO. The database context will give us the ability to store, update, query and delete collections of objects - we just have the one for this example, so our data container, db context and tracking class all together look like this:
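The original snippet isn't reproduced here, but a minimal sketch of its shape - with made-up names - is:

using System;
using System.Data.Entity;

// The POCO whose instances become rows in a table.
public class TrackingEvent
{
    public int Id { get; set; }
    public string Action { get; set; }
    public string Username { get; set; }
    public DateTime OccurredAt { get; set; }
}

// The context exposes the set we store, query, update and delete through.
public class TrackingContext : DbContext
{
    public TrackingContext() : base("name=TrackingDb") { }

    public DbSet<TrackingEvent> TrackingEvents { get; set; }
}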
Then, again from the console enable migrations:
Enable-Migrations -ProjectName <ProjectName>
In the project you added EF to, find the .\Migrations\Configuration.cs class. Set AutomaticMigrationsEnabled to true in the constructor:
public Configuration()
{
AutomaticMigrationsEnabled = true;
}
Now we're all set to have EF automatically put our database together. It will create tables, stored procedures, primary and foreign keys based on the signature of our objects. Lastly, run this command from the Package Manager console:
Update-Database -Verbose -ProjectName "<Project.Name>" -ConnectionProviderName "System.Data.SqlClient" -ConnectionString "xxx"
This is the same command that can be run when new entities are added to the context, or when new properties are added to an existing entity. Note that this command won't support the removal of a property by default, as it would result in data loss. Column renames are supported, but beyond the scope of this blog. Watch this video tutorial for further instruction, and see here for more examples of maintenance commands.