Just for note-keeping, I've written down some methods of reducing the size of a .NET Core application. I thought others could use it as well, so here you go.

Original Size

First off, let's see how much disk space a self-contained 'hello world' application takes up.

> dotnet new console
> dotnet publish -r win-x86 -c release

Size: 53.9 MB - yuck!

Trimming

Microsoft has built a tool that finds unused assemblies and removes them from the distribution package. This is much needed, since the 'netcoreapp2.0' profile is basically .NET Framework all over again, and it contains a truckload of assemblies our little 'hello world' application doesn't use.

> dotnet new console
> dotnet add package Microsoft.Packaging.Tools.Trimming -v 1.1.0-preview1-25818-01
> dotnet publish -r win-x86 -c release /p:TrimUnusedDependencies=true

Size: 15.8 MB

Much better! Although we have only removed unused assemblies - what about unused...
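As a side note on the trimming setup: instead of passing /p:TrimUnusedDependencies=true on every publish, the same configuration should also be expressible directly in the .csproj. This is just a sketch based on the package and property names used above; I have only verified the command-line form:

<ItemGroup>
  <!-- Same trimming package that 'dotnet add package' records -->
  <PackageReference Include="Microsoft.Packaging.Tools.Trimming" Version="1.1.0-preview1-25818-01" />
</ItemGroup>

<PropertyGroup>
  <!-- Equivalent of /p:TrimUnusedDependencies=true on the publish command line -->
  <TrimUnusedDependencies>true</TrimUnusedDependencies>
</PropertyGroup>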
I often find myself needing a compression library in a software development project, and after a few tests with disappointing results I usually fall back to spawning 7Zip as an external process. This time, however, the application specification prohibits any data from being written to disk, which 7Zip needs to do in order to work correctly. I needed to do everything in memory, so I had to test some of the C# libraries available for compression as well as the different compression algorithms.

Here is a full list of the tested libraries:

.NET DeflateStream
ICSharpCode SharpZip
Zlib.NET
DotNetZip
Managed LZMA
LZMA SDK
QuickLZ

Compression 101

Before I get to the results of the benchmark, I think it is important to understand the different archive formats and compression algorithms in order to compare the results with other benchmarks and/or tests. Let's get to it!

A compression algorithm can be compared to a small machine that takes in some data, does some math on it and...
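Since the whole point here is to stay in memory, a minimal sketch of what that looks like with the built-in .NET DeflateStream (the first library in the list above - the helper names are my own) could be:

using System.IO;
using System.IO.Compression;

static class InMemoryCompression
{
    // Compress a byte array entirely in memory - no temporary files on disk.
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var deflate = new DeflateStream(output, CompressionMode.Compress))
            {
                deflate.Write(data, 0, data.Length);
            }
            // MemoryStream.ToArray still works after the wrapping stream has closed it.
            return output.ToArray();
        }
    }

    // Decompress back into a byte array, again without touching the disk.
    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            deflate.CopyTo(output);
            return output.ToArray();
        }
    }
}

Using a MemoryStream on both ends is what keeps everything off the disk.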
One of the important algorithms in vector graphics and robotics is the Ramer-Douglas-Peucker algorithm. It simplifies a polygon by reducing the number of points within some tolerance. It was invented independently by Urs Ramer in 1972 and by David Douglas & Thomas Peucker in 1973. In 1992 it was improved by John Hershberger & Jack Snoeyink, at the cost of the algorithm only working with simple 2D planar polylines, and not in higher dimensions.

Here is how it roughly works:

0. Start with a polygon.
1. Make a polyline between the two endpoints. This is the initial approximation to the simplified polygon.
2. Calculate the distance between the edge and the remaining vertices. If the vertex tested is further away than the user-defined tolerance, then the vertex is added to the simplification.
3. Repeat step 2 until you have a complete approximation of the polygon, like shown in step 4 in the picture.

The idea with the algorithm is that we can give it an input like t...
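To make the steps above concrete, here is a rough C# sketch of the recursive form of the algorithm. The Point type and method names are my own, and it assumes simple 2D polylines:

using System;
using System.Collections.Generic;

public struct Point
{
    public double X, Y;
    public Point(double x, double y) { X = x; Y = y; }
}

public static class RamerDouglasPeucker
{
    // Returns a simplified polyline; vertices closer than 'tolerance'
    // to the current approximation are discarded.
    public static List<Point> Simplify(List<Point> points, double tolerance)
    {
        if (points.Count < 3)
            return new List<Point>(points);

        // Find the vertex furthest from the edge between the two endpoints.
        int index = 0;
        double maxDistance = 0;
        for (int i = 1; i < points.Count - 1; i++)
        {
            double d = PerpendicularDistance(points[i], points[0], points[points.Count - 1]);
            if (d > maxDistance) { maxDistance = d; index = i; }
        }

        // Further away than the tolerance: keep that vertex and recurse on both halves.
        if (maxDistance > tolerance)
        {
            var left = Simplify(points.GetRange(0, index + 1), tolerance);
            var right = Simplify(points.GetRange(index, points.Count - index), tolerance);
            left.RemoveAt(left.Count - 1); // avoid duplicating the split vertex
            left.AddRange(right);
            return left;
        }

        // Otherwise the straight edge is a good enough approximation.
        return new List<Point> { points[0], points[points.Count - 1] };
    }

    private static double PerpendicularDistance(Point p, Point a, Point b)
    {
        double dx = b.X - a.X, dy = b.Y - a.Y;
        double length = Math.Sqrt(dx * dx + dy * dy);
        if (length == 0)
            return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
        // Twice the triangle area (a, b, p) divided by the base length gives the height.
        return Math.Abs(dx * (a.Y - p.Y) - (a.X - p.X) * dy) / length;
    }
}

Keeping the furthest-away vertex and splitting the polyline there is exactly steps 2 and 3 above.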