tag:blogger.com,1999:blog-6028507671193008692024-02-02T10:14:42.274-05:00Web.NETBuilding modern Web on the cloud with advanced tools and frameworks.AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.comBlogger73125tag:blogger.com,1999:blog-602850767119300869.post-50535631964971886792023-04-08T16:23:00.001-04:002023-04-08T16:23:34.673-04:00Smart Home Concept and Choices<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1UIZJ-N00lRG_pN8mZFRQaZSO7_wEvBN9EK-rC7kJvXI3_oiQ4uU0tuPE_qgKVUeImMjBuEb3uES9gdy3ccCOQ0UW25SuwLvum22-SbYNcSCwrUaQZ4tR-J8VNlwh7Cp8PgJLaCjoaXg/s220/smart-home-logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="201" data-original-width="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1UIZJ-N00lRG_pN8mZFRQaZSO7_wEvBN9EK-rC7kJvXI3_oiQ4uU0tuPE_qgKVUeImMjBuEb3uES9gdy3ccCOQ0UW25SuwLvum22-SbYNcSCwrUaQZ4tR-J8VNlwh7Cp8PgJLaCjoaXg/s0/smart-home-logo.png" /></a></div><p style="text-align: left;">Smart Home is a thing for quite some time. Not just for enthusiasts but for pretty much everyone regardless of whether we realize that or not. Smart Home is not necessarily a standalone industry but its ideology spreads out very quickly and its elements get embedded in many areas of our life without our explicit consent. So it's not a question of "if" or "when" but rather how to get onboard and start riding the "smart home" wave to our benefits.</p><span><a name='more'></a></span><p></p><h3 style="text-align: left;">Concept</h3><p>Let's start from understanding what a "Smart Home" really means. I admit there are many various interpretations of the term and most of them are subjective as they are based on individual choices, usages, scenarios, etc. But I think we can really pick a few most important requirements and use them as a baseline that draws the border between the "Smart" and "Regular" home.</p><p>The first requirement I will call a "<b>controllable home</b>". This is a necessary requirement meaning without satisfying it there is no way of making a home smart. The name implies that the house is equipped with devices that the occupants use to manage the house conditions and that can be controlled remotely, without a direct physical contact. The most ubiquitous example of such a device is a Wi-Fi connected thermostat.</p><p>Another requirement I will call a "<b>connected home</b>". This requirement necessitates an interface between people that occupy the house and the controllable parts (devices, appliances, etc.) of the house. By interface I mean not physical controls that we've always had like buttons, knobs, levers, switches, etc. but a virtual interface that does not require a physical presence of a human in front of a device being controlled nor in the house itself. Since we live in the wireless Internet era most of the homes would have a Wi-Fi local network connected to the Internet and people would have specific mobile phone apps to interact with the smart devices. The same Wi-Fi connected thermostat with a mobile app is a perfect example. Another one is a Wi-Fi connected vacuum like Roomba.</p><p>And finally is the requirement that makes a home actually "<b>smart</b>" by giving it an ability to perform set routines and react on specific events in a way that make occupants' lives easier and more comfortable. 
In a general sense, it's a computer that is connected to the home's Wi-Fi network and can interact with its household devices without human involvement while at the same time providing a centralized point of access for people to see what's going on and make necessary adjustments. Using the previous examples, a thermostat can automatically change the house's temperature and a vacuum can clean floors automatically on a schedule, which makes those devices somewhat smart.</p><h3 style="text-align: left;">Terminology</h3><p>In its current state, the "Smart Home" paradigm has already evolved enough to establish its own terminology. Let's get familiar with some terms to ease into future conversations.</p><p>Firstly, all the household devices that can participate in a smart home ecosystem are called "smart devices". No surprise here. All the home improvement stores that sell such devices use this term as well.</p><p>Next, the combination of networking media that connects all household smart devices into a single micro world is called a smart home network. Again, just common sense, but this requires a bit more elaboration because there are various connectivity technologies currently in use in the smart home space and it is important to know their names.</p><p>The most well recognized is <b>Wi-Fi</b>. It's also the most popular and widely used. The others are far less popular and may sound unfamiliar, but let's mention them for information's sake. In no specific order, they are <b>Z-Wave</b>, <b>ZigBee </b>and <b>Matter</b>. All of them are wireless, and it is important to recognize them when purchasing a smart device that uses one of these connectivity technologies. In general, if in doubt about whether your smart home supports one or another connectivity option, choose the one you know for sure, most likely Wi-Fi.</p><p>Finally, the "brains" of the smart home is usually called a "smart home controller" or a "smart home hub", implying that it's some kind of center of operations, which is true to a certain degree. Let's just call it a "<b>Smart Hub</b>" as a generic name. A few popular choices are the Samsung SmartThings hub and the Amazon Alexa or Google Assistant hubs.</p><p>It is important to mention that in some cases the Smart Hub may not be physically present in the house and could instead be software running somewhere in a "cloud" owned by a third-party vendor and interfaced via a mobile app. Such a way of controlling a smart home requires continuous Internet access. Common examples, again, are Amazon Alexa and Google Assistant.</p><h3 style="text-align: left;">Choices</h3><p>If you've read this far and are not feeling overwhelmed yet, this is where the excitement starts building up. Because there are so many choices, they may make an unfamiliar person simply give up and forget about the smart home idea. But fear not. There are a few simple criteria that allow anyone to filter out most of the noise and focus on what works well for them.</p><h4 style="text-align: left;">First Choice</h4><p>Would you prefer to <u>control your smart home locally or delegate it to a third party?</u> There are pros and cons to either, but knowing which one you choose and understanding the implications is important.</p><p>An example of a locally controlled smart home would be a computer in your local Wi-Fi network running 24/7 with software like the open-source and free <i>Home Assistant</i> or <i>openHAB</i>. 
As a disclaimer, I provide no links to any of the mentioned products because that's not the purpose of this post.</p><p>Examples of delegated control of a smart home ecosystem are the ubiquitous Amazon Alexa and Google Assistant, but there are many other products on the market as well.</p><p>A good question to ask here is whether local and delegated control can be combined in a single smart home setup. The answer is tricky, as it's kind of yes and no. What that means is that it's technically possible but practically makes little sense, as the two approaches contradict each other in many areas.</p><p>To put it simply, <u>choose a Locally Controlled Smart Home</u> if you:</p><p></p><ul style="text-align: left;"><li>Like to be in charge</li><li>Are concerned about security and privacy</li><li>Are not afraid of tinkering with technology and dealing with a lot of technical information</li><li>Feel proud of being the builder and owner of a smart home</li></ul><p></p><p><u>Choose to Delegate Control of your Smart Home</u> if you:</p><p></p><ul style="text-align: left;"><li>Prefer to plug and forget</li><li>Like simplicity over complexity</li><li>Trust the marketing promises of smart home vendors</li><li>May feel overwhelmed by a large amount of technical detail</li></ul><p></p><h4 style="text-align: left;">Second Choice</h4><p>What kind of networking will you be using for your smart home? Most likely, Wi-Fi is what many people already have in their home, so it's a given. Will your smart home need to support any other technologies? The answer depends on additional factors like:</p><p></p><ul style="text-align: left;"><li>How many smart devices you plan to use and how large a territory they will be installed on. Wi-Fi, for example, while popular and readily available, has limitations on the number of connected devices and the territory it can cover.</li><li>The types of devices you'd like to use. Some specific device types may not support certain connectivity technologies, including Wi-Fi. For example, it's not easy to find a Wi-Fi connected home energy meter, but a Z-Wave connected one is easily available.</li></ul><div>The most practical approach is to <u>start with what you already have, most likely Wi-Fi</u>, and then add another technology when a need arises.</div><p></p><h4 style="text-align: left;">Third choice</h4><p>Which brands of devices should one choose for their smart home? There are so many now, it's overwhelming. However, common sense suggests the following:</p><p></p><ul style="text-align: left;"><li>Choose functionality and quality over price. You get what you pay for.</li><li>Choose compatibility first and don't jump into creating a zoo of incompatible technologies. It's relatively easy to keep track of your smart home choices and verify that all of them are satisfied when adding another smart device.</li><li>Make informed decisions. Don't rush to spend money if you don't understand what exactly the benefits and usage scenarios are. Do your research first.</li><li>Educate yourself about smart technologies. The time you invest in your own knowledge will pay off many times over once you spend your money and realize that your decision was right.</li></ul><h2 style="text-align: left;">Conclusion</h2><div>Hopefully, this introduction to the Smart Home world will be useful to readers who are interested in beginning their journey into smart technologies but are not sure where and how to start. Feel free to comment below about your thoughts and experiences. 
</div><p></p><p><br /></p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-18888211205546813932023-03-06T15:00:00.003-05:002023-03-15T19:39:15.731-04:00When Your CyberPower UPS is Out of Warranty You are Out of Luck<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixf4zrF40XQeY6udEU35sSYeXKplFpdDT2_-WUDNnnkjBNcHVHvIy7kl1cREQtC4SBy8QeS0mwcWC64nZATKixuo7Z5zl0r1L_ogPTVdPMe8SazmqoDLzzrUk-mwPC9mVDfJiRtp49A1usR9Okr5q_sb8hNqVBzyiNNFCMtCPlKDDF3vou048fvJTt/s200/battery-no-charge-icon.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="200" data-original-width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixf4zrF40XQeY6udEU35sSYeXKplFpdDT2_-WUDNnnkjBNcHVHvIy7kl1cREQtC4SBy8QeS0mwcWC64nZATKixuo7Z5zl0r1L_ogPTVdPMe8SazmqoDLzzrUk-mwPC9mVDfJiRtp49A1usR9Okr5q_sb8hNqVBzyiNNFCMtCPlKDDF3vou048fvJTt/s16000/battery-no-charge-icon.png" /></a></div><br />According to CyberPower support when your CP UPS is out of warranty your only options are:<p></p><ol style="text-align: left;"><li>Recycle and replace the UPS. It's totally up to you if you want to stick with the CyberPower products of switch away in hope of better luck next time.</li><li>It may help in some cases to try and replace a battery that you can only purchase from a third party supplier.</li></ol><p>How would you know if purchasing a battery might help revive your UPS? Well, you'll need to be brave and open it up and do some research. If you don't feel like digging in the guts of an electrical device, this options is not for you and you may stop reading and go on with the option # 1 in the above list.</p><span><a name='more'></a></span><p>For the brave and technically savvy souls keep reading.</p><p>The only case when it does make sense to replace a battery is when the battery has gone bad but everything else in the UPS works properly. Specifically, what should be working is the battery charging. If the charging does not work, the UPS is out of comission, see the option # 1 in the above list.</p><p>To figure out if the charging still works, open up the ups by removing the screws on the back and separating the case in half. <b><span style="color: red;">Make sure the UPS is disconnected from the mains power.</span></b> This will give you access to the battery. Locate the battery's terminals and measure the voltage with a multimeter.</p><span></span><span><!--more--></span><p><span style="background-color: #ffe599; color: #783f04;">Remember, the battery is DC and it has voltage indicated on the side so set your multimeter accordingly. In my cases I only dealt with the 12V batteries but I've read that it could also be 6V or more than 12V, so make sure.</span></p><span><!--more--></span><p>If the the battery's voltage is significantly lower than nominal voltage, for example ~3V for a 12V battery, chances are the <u>battery is dead</u> and you may want to explore option #2 in the list above (after verifying that the charging is working). 
If its voltage is lower than it should be but still close enough to nominal the chances are the <u>charging IS NOT WORKING</u>.</p><p style="-webkit-text-stroke-width: 0px; color: black; font-family: "Times New Roman"; font-size: medium; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: left; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px;"></p><p></p><p><span style="color: red;">Now, the next step is a bit more elaborate and could be dangerous. A disclaimer, that if you execute steps below, you do that entirely on your own risk and you take full responsibility for anything that may happen during that execution.</span></p><p>In short, you need to measure the voltage on battery wires (not on battery terminals) when the UPS is connected to the mains power. You need the battery be disconnected while you measure the voltage. If the voltage is slighly above the battery's nominal voltage, the charging IS WORKING and you can go ahead and replace a battery if you wish so. Otherwise, the charging IS NOT WORKING and your only option is # 1 in the list above. For example, for a 12V battery the charging voltage is expected to be about 14V. If it's considerably lower or close to zero, it means there is no charging.</p><p><span style="background-color: #ffe599; color: #783f04;"><span></span></span></p><!--more--><span style="background-color: #ffe599;"><span style="color: #783f04;">In some UPS models the charging system detects a bad battery by its low voltage and does not turn on. In such scenarios the only way to verify that the UPS's electronics is in working condition is by attaching a good battery. If you have an available rechargable battery of the same chemistry and nominal voltage in good condition you might consider using it for testing the UPS on your own risk.</span></span><span><!--more--></span><p></p><p>Thanks for reading and hope that was helpful. If not, my apology for have wasted your time.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-2850738468519188072021-04-15T14:11:00.000-04:002021-04-15T14:11:20.983-04:00dotPeek: a Totally Free Alternative for .NET Reflector<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAa5llFdGh-3ND2DIyry6hADin-_rbCpNMWh1ovzyYEmtSjUbu8fmFZvuqtjwT2nLX9QHqYmYIO6bUU-ETDDC-WZBJcDngWBo4MtRHg5yaiafxpTSboREb52aJYCe_wMTe8JQ85koIzoQ/s200/jb-dotpeek-icon.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="200" data-original-width="200" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAa5llFdGh-3ND2DIyry6hADin-_rbCpNMWh1ovzyYEmtSjUbu8fmFZvuqtjwT2nLX9QHqYmYIO6bUU-ETDDC-WZBJcDngWBo4MtRHg5yaiafxpTSboREb52aJYCe_wMTe8JQ85koIzoQ/w200-h200/jb-dotpeek-icon.png" width="200" /></a></div>As a professional .NET developer you recognize the value of a .NET decompiler. There are many scenarios when the original source code is not available but requires an understanding and using a decompiler is definitely one of the best ways to look at the source code. 
One of the advantages working with a managed framework like .NET is an ability for a decompiler to convert an assembly into an code of a high level programming language like C# instead of a machine code. And this is exactly an expertise in which the dotPeek shines.<span><a name='more'></a></span><p></p><p>One of the pioneers in the field and most favourite .NET decompilers is a <a href="/2011/02/net-reflector-is-becoming-paid-for.html" target="_blank">.NET Reflector</a>. About ten years ago it was acquired from its original author, that offered it to the development community for free, and became a paid product. I am not discussing here how the today's modern paid version of .NET Reflector compares to the original free one. Instead I want to bring developers' attention to another mighty .NET decompiler that has been on the market for many years and is completely free. Let's welcome<b> JetBrains dotPeek</b>.</p><p>Coming from a trusted name in the industry, JetBrains, an author of a famous Resharper, and built on the same technologies, dotPeek, in my opinion, is a must have tool for every .NET developer. It's a standalone Windows program that provides a well thought environment very familiar to Resharper users, that allows easily navigating any .NET assemblies in a form of C# code with the same level of speed and intelligence as if one was working with the original C# source code inside Visual Studio.</p><p>Do I need to say more? I don't think so. If you have been a .NET Reflector user and stopped using it after it changed ownership you owe it to yourself to take a look at the dotPeek. If you never used a .NET decompiler before, read above.</p><p>As a disclaimer, I am in no way associated with JetBrains, and all above is just my private opinion based on my personal many years long experience.</p><p>Happy decompiling.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-40980616458720228932021-04-09T19:03:00.000-04:002021-04-09T19:03:34.346-04:00Problems Migrating From SQL Server Management Studio (SSMS) v17 to v18<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUsEF-s13mxrpBa6j8qpEFwVEfN0k54L98U4qEEbPxwRFkUxKqWBUgU3ceQwz0mf87pfbmwm0n0MBZ2zjf1z3EiLatMDsUZz_FtQTOVUDJ0RMCtx2LbXAvziwmKrYfpmgTI-k1OEdbHdg/s623/ssms_18_error.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="351" data-original-width="623" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUsEF-s13mxrpBa6j8qpEFwVEfN0k54L98U4qEEbPxwRFkUxKqWBUgU3ceQwz0mf87pfbmwm0n0MBZ2zjf1z3EiLatMDsUZz_FtQTOVUDJ0RMCtx2LbXAvziwmKrYfpmgTI-k1OEdbHdg/w320-h181/ssms_18_error.png" width="320" /></a></div>If you are an active multi-year user of Microsoft SQL Server Management Studio you are probably well-used to a controversial history of this product as an arguably one of the most loved and hated at the same time and one of the most outdated as well. 
Understandably, with so many different active versions of Microsoft SQL Server the requirements for this product are quite challenging but the users (us) continue to vote for this tool to exist and push Microsoft to keep updating it regardless of many other alternatives available on the market.<span><a name='more'></a></span><p></p><p>Just recently, I've finally made efforts to install the most recent version SSMS v18.8 while still using the most stable previous version v17.9. As I have expected (with a false hope that my expectation would not be true) the installation of v18.8 was not smooth, so I want to share some tips to make sure it works.</p><p>The reason that I have avoided v18 for so long well after it was released is because of many unexpected problems popping up every single time I tried to install it. So, I kept my trustful SSMS v17.9 and waited for the next 18.x to be released to unsuccessfully try it out. The secondary reason was that seemingly no new worth mentioning features have been released in v18.x so I did not have strong motivation to make an effort.</p><p>I have personally tried many alternative SQL IDEs, namely Azure Data Studio, JetBrains DataGrip, SQL extension for VS Code, SQL project in Visual Studio, Apex SQL and more, but I keep returning to SSMS as arguably a more convenient and familiar environment. If you are in the same shoes, keep reading.</p><h3 style="text-align: left;">Installation Problem</h3><p>If you have/had a previous version of SSMS installed on your machine (which is most likely, if you are still reading) chances are high you've got that error (picture above) during the installation. As you can see, it clearly says that the problem is with Microsoft ODBC Driver 17 for SQL. To fix it, open Programs in your Windows Settings and uninstall the ODBC Driver 17. Important: you'll need to reboot Windows after uninstalling it.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpYLFQfn5HzWILopw8yuhkD68eE77txoc_nDFzeTeHE_7AUQmgdEN4sdgpZ-F1l4eS_8aZY-Z2TYR6jIM4YMxXEtySMsAURs0XRwEY5txJo1tKuyjgSlTalOB1Al9oFBkSM3rshXlGtQg/s453/sql_odbc_17.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="241" data-original-width="453" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpYLFQfn5HzWILopw8yuhkD68eE77txoc_nDFzeTeHE_7AUQmgdEN4sdgpZ-F1l4eS_8aZY-Z2TYR6jIM4YMxXEtySMsAURs0XRwEY5txJo1tKuyjgSlTalOB1Al9oFBkSM3rshXlGtQg/s320/sql_odbc_17.png" width="320" /></a></div><p>After uninstalling and rebooting the installation of v18.8 completed successfully.</p><h3 style="text-align: left;">Password Remember Problem</h3><p>After opening the new SSMS 18 first time I've noticed that it does not remember the passwords to connected databases despite of multiple attempts. The reason, as you can find on multiple online user forums, is two-fold: automatic silent migration of user settings from a previous installed version of SSMS (if it exists) and SSMS switching to using Windows Credentials Manager for password storage.</p><p>To fix the problem, you'll need to remove the currently stored credentials and also the SSMS 18 user settings folder. Make sure SSMS is not running while doing that. After the clean-up and starting SSMS it will ask to import user setting from an older SSMS so please don't do that. After that saving passwords starts working again. 
SSMS user settings folder is located at c:\users\[USER NAME]\AppData\Roaming\Microsoft\SQL Server Management Studio\18.0.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-rrcp0JPhqCgwXSvNyirn4SkU5CLs1T480s25aI1CxdfX2BbQEMfdyeWBAojDZnPOggPXJYzpYOPMQwM8Ci_cwmaeMdXNLSCKBR0pLnDwgPzpzAS8fnzE7c45fdsbmwz1FXOhy14phys/s330/ssms_cred.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="167" data-original-width="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-rrcp0JPhqCgwXSvNyirn4SkU5CLs1T480s25aI1CxdfX2BbQEMfdyeWBAojDZnPOggPXJYzpYOPMQwM8Ci_cwmaeMdXNLSCKBR0pLnDwgPzpzAS8fnzE7c45fdsbmwz1FXOhy14phys/s320/ssms_cred.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrsoIrh1eVk_QJRSc3_3th2XLG_KBGUdbpFHX4FcxA7ZAFUrhKpcFbrnZ0Tw0Jnijs8EkocUgYN2ubK-XTzNBuf-bUCwQDIbUqa_nbCHgVEiU72HDYDN3DjbSSdncqT8sznUgaSWlmit8/s367/ssms_folder.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="120" data-original-width="367" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrsoIrh1eVk_QJRSc3_3th2XLG_KBGUdbpFHX4FcxA7ZAFUrhKpcFbrnZ0Tw0Jnijs8EkocUgYN2ubK-XTzNBuf-bUCwQDIbUqa_nbCHgVEiU72HDYDN3DjbSSdncqT8sznUgaSWlmit8/s320/ssms_folder.png" width="320" /></a></div><h3 style="text-align: left;">Extensions Loading Problem</h3><p>If you had some extensions installed for a previous version of SSMS you may see a start-up error "Error loading extension ...". Seemingly, this error won't affect how SSMS works but it may be annoying. Unfortunately, there is nothing in SSMS UI that can deal with extensions, so just navigate to an extensions folder C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Extensions and remove offensive folders that don't belong there.</p><p>Hopefully, these tips above are helpful. Don't hesitate to leave comments if you have different experience.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-58649680579250218782021-03-31T13:30:00.076-04:002021-04-15T17:01:35.728-04:00Reorder Columns in Spark DataFrame with Select() and Scala<script crossorigin="anonymous" src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.23.0/components/prism-scala.min.js"></script>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfOe_aW1rI5Z1WQGE-YGX4sc4IvK2klTmNbmsaZNGh1l6qHERH09XnMkO068CM28btB8jY6leNcfL30cfe3x7RBuFDDV1u2J776G1xco_8W9zrfjSy-Y57ZMbeFVdsPhMGaEfwJVX3zdw/s220/spark%252Bscala_logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="220" data-original-width="220" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfOe_aW1rI5Z1WQGE-YGX4sc4IvK2klTmNbmsaZNGh1l6qHERH09XnMkO068CM28btB8jY6leNcfL30cfe3x7RBuFDDV1u2J776G1xco_8W9zrfjSy-Y57ZMbeFVdsPhMGaEfwJVX3zdw/w200-h200/spark%252Bscala_logo.png" width="200" /></a></div><br /><p>Sometimes, when dealing with a Spark data frame it may be necessary to reorder columns in a certain order. For example, to keep data consistent when trying to union two or more data frames with the same schema but different order of columns. Let's explore how that can be done relatively easy using a <code>select()</code> Spark SQL function.<span></span></p><a name='more'></a><p></p><p>Let's start from creating data frames for our example. We will create two data frames with the same list of columns but in a different order.</p>
<pre class="language-scala"><code>val ny = Seq(
("1 Main St","Buffalo","NY",14200),
("1 Church St","Lockport","NY",14090),
("1 Madison St","Rochester","NY",14600));
val pa = Seq(
("Lancaster","PA","1 Main Ave",17600),
("Philadelphia","PA","1 Church Blvd",19100),
("Harrisburg","PA","1 Lincoln St",17100));
val nyDf = ny.toDF("Street","City","State","Zip");
val paDf = pa.toDF("City","State","Street","Zip");
nyDf.printSchema()
nyDf.show()
paDf.printSchema()
paDf.show()
</code></pre>
<p>The code above when executed yields the following output:</p>
<pre class="lang-text"><code>root
|-- Street: string (nullable = true)
|-- City: string (nullable = true)
|-- State: string (nullable = true)
|-- Zip: integer (nullable = false)
+------------+---------+-----+-----+
| Street| City|State| Zip|
+------------+---------+-----+-----+
| 1 Main St| Buffalo| NY|14200|
| 1 Church St| Lockport| NY|14090|
|1 Madison St|Rochester| NY|14600|
+------------+---------+-----+-----+
root
|-- City: string (nullable = true)
|-- State: string (nullable = true)
|-- Street: string (nullable = true)
|-- Zip: integer (nullable = false)
+------------+-----+-------------+-----+
| City|State| Street| Zip|
+------------+-----+-------------+-----+
| Lancaster| PA| 1 Main Ave|17600|
|Philadelphia| PA|1 Church Blvd|19100|
| Harrisburg| PA| 1 Lincoln St|17100|
+------------+-----+-------------+-----+
</code></pre>
<p>As you can see, both data frames, <code>nyDf</code> and <code>paDf</code>, contain similar data with the same column names but in a slightly different order. Let's combine our two data frames into a single one using a <code>union()</code> function. If the schemas of two data frames are different the <code>union()</code> function should produce an error.</p>
<pre class="language-clike"><code>val allDf = nyDf.union(paDf);
allDf.printSchema()
allDf.show()
</code></pre>
<p>After the code is executed we see the output below. The <code>union()</code> executed successfully without an error, meaning that it considers the schemas of two data frames to be the same even though the columns are in a different order.</p>
<pre class="lang-text"><code>root
|-- Street: string (nullable = true)
|-- City: string (nullable = true)
|-- State: string (nullable = true)
|-- Zip: integer (nullable = false)
+------------+---------+-------------+-----+
| Street| City| State| Zip|
+------------+---------+-------------+-----+
| 1 Main St| Buffalo| NY|14200|
| 1 Church St| Lockport| NY|14090|
|1 Madison St|Rochester| NY|14600|
| Lancaster| PA| 1 Main Ave|17600|
|Philadelphia| PA|1 Church Blvd|19100|
| Harrisburg| PA| 1 Lincoln St|17100|
+------------+---------+-------------+-----+
</code></pre>
<p>But we have a problem. You can see that the schema of the combined data frame was inferred from the first source data frame, but the second data frame was not reordered and its data is placed incorrectly. We need to fix this so that in the combined data frame all the data ends up in the correct columns. Our solution is to reorder the columns in the second data frame to match the order of the first data frame before the union. For this, we'll modify the union code to reorder the columns in the second data frame using a <code>select()</code> function:</p>
<pre class="language-clike"><code>val allDf = nyDf.union(paDf.select("Street","City","State","Zip"));
allDf.printSchema()
allDf.show()
</code></pre>
<p>After the updated code runs we see the new output below. The data in the combined data frame now appears in the correct columns.</p>
<pre class="lang-text"><code>root
|-- Street: string (nullable = true)
|-- City: string (nullable = true)
|-- State: string (nullable = true)
|-- Zip: integer (nullable = false)
+-------------+------------+-----+-----+
| Street| City|State| Zip|
+-------------+------------+-----+-----+
| 1 Main St| Buffalo| NY|14200|
| 1 Church St| Lockport| NY|14090|
| 1 Madison St| Rochester| NY|14600|
| 1 Main Ave| Lancaster| PA|17600|
|1 Church Blvd|Philadelphia| PA|19100|
| 1 Lincoln St| Harrisburg| PA|17100|
+-------------+------------+-----+-----+
</code></pre>
<p>If we want a more generic version of reordering the columns without hard-coding the column names we could use the <code>columns</code> property of the <code>DataFrame</code> to get the list of the column names in the correct order:</p>
<pre class="language-clike"><code>val allDf = nyDf.union(paDf.select(nyDf.columns.toSeq.map(c => col(c)):_*));
allDf.printSchema()
allDf.show()
</code></pre>
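<p>As a side note, if your Spark version is 2.3 or newer, the <code>unionByName()</code> function resolves columns by name instead of by position, which avoids the manual reordering altogether. A quick sketch, equivalent to the example above:</p>
<pre class="language-scala"><code>// Spark 2.3+: columns are matched by name, so no select() reordering is needed
val allDf = nyDf.unionByName(paDf)
allDf.printSchema()
allDf.show()
</code></pre>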
<h4 style="text-align: left;">Conclusion</h4><p>We have learned how to use a <code>select()</code> function to reorder columns in a data frame and use that technique to combine two or more data frames with the same schemas but different order of columns.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-27422618894860469232021-03-18T21:01:00.006-04:002021-03-19T12:06:11.013-04:00Incorrect ReSharper Behavior for CS1591 Warning<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1X650tp1UOioYT9KupxDE5Vfvz23-UEWHXq8hOIK_jtFB9Fl1fAI8NO90uM-ZKcxTiJgJfArxI1XkAM65dQj0PqTSYolXmWXxxqzclOONa_Eazz5SMu7MhhPFHZCGsss9CVT3IfwZDuw/s610/resharper_error_cs1591.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="196" data-original-width="610" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1X650tp1UOioYT9KupxDE5Vfvz23-UEWHXq8hOIK_jtFB9Fl1fAI8NO90uM-ZKcxTiJgJfArxI1XkAM65dQj0PqTSYolXmWXxxqzclOONa_Eazz5SMu7MhhPFHZCGsss9CVT3IfwZDuw/s16000/resharper_error_cs1591.png" /></a></div><p>After upgrading from Visual Studio 16.8.x to 16.9.x the JetBrains ReSharper suddenly started marking a lot of public identifiers in the code as compiler errors with a message: "Missing XML comment for publicly visible type or member...". You can imagine how that can be very annoying.</p><p><span></span></p><a name='more'></a><p></p><div style="text-align: center;"></div><p>So, you opened your code and now there are a lot of red ReSharper marks all over the place that weren't there before. The most annoying part is those marks are not warnings but actual errors. Is that right?</p><p>To be sure, let's look at the Visual Studio's Error List panel but there is nothing showing there related to the errors outlined by the ReSharper, so it's definitely a ReSharper's problem.</p><p style="text-align: center;"><img border="0" data-original-height="58" data-original-width="455" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUpKOodhKm5p2K09snlYqDRNhcK0WHIVthJEFhzLYDchqvKyzKJ2ACUvvAe3VLcWYHTeuDs1OO9woPqd04fbv3ZTGoj5XBvLDdqW2-3IuESfIRGXxx8xUlQPeUdF9nT_kKEO8RW7lduRs/s16000/vs_errors_cs1591.png" /></p><p>Upon inspection of ReSharper properties a specific setting responsible for a ReSharper reaction still correctly shows the "warning" level, yet the ReSharper insists that it's an actual error and has to be fixed.</p>
<p><img border="0" data-original-height="100" data-original-width="735" height="92" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoq-hdG2NcZzu_pfuXZ6KWSd09rXXZGwLxJ_KWXvoMuAF7k4sUCL0Pnb9LH2J4Gh2pNPUcjmZDKqjj-_5fZgh29luL44ttZGgWM1cFqSsEITbp8L6-djWGdtxPw5v_9Sqw10xNfZmfTkc/w670-h92/resharper_setting_cs1591.png" width="670" /></p>
<p>Well, what do we need to do to get rid of this annoying false positive ReSharper's error? First, let's understand why all this is happening. Actually, there are a few reasons for the ReSharper's reaction:</p><p>1. The project file .csproj includes a setting to generate an XML documentation file during a compilation. This setting is responsible for a compiler warning CS1591.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1sqmvsZygl6atBeUkVi1Hqx0zvIo9_8180o5sFHlV4MvXwBSc0r_m_qZrGoEg9aNxdEJ4_PIDQQrZ-LM9VGga4Yhxj8Uwwe3LCKWvt5v-jYJ67gsmahnl79LwyRweoG8d6TJIX6mOBO0/s204/vs_prj_doc_settings.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="91" data-original-width="204" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1sqmvsZygl6atBeUkVi1Hqx0zvIo9_8180o5sFHlV4MvXwBSc0r_m_qZrGoEg9aNxdEJ4_PIDQQrZ-LM9VGga4Yhxj8Uwwe3LCKWvt5v-jYJ67gsmahnl79LwyRweoG8d6TJIX6mOBO0/s0/vs_prj_doc_settings.png" /></a></div><p>2. However, that setting alone should not have triggered the ReSharper to show an error because it's own level is set to warning. Therefore, there must be something else, and yes, there is: apparently another project setting "Warnings as errors" is set to "All".</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgh0InSwmtIEi1mfXt-E1mheKAghp1_GdKcoaV0i8edHiQcYsiPASTMLDJzpyZEat0KdOfcx6LetMD2085NJ9wNgdMc8m7iaOGkaGHVpvktXnzeAEBkxuGoLNLhFI4rGXbf6r1lLMxriQ8/s167/vs_prj_settings.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="110" data-original-width="167" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgh0InSwmtIEi1mfXt-E1mheKAghp1_GdKcoaV0i8edHiQcYsiPASTMLDJzpyZEat0KdOfcx6LetMD2085NJ9wNgdMc8m7iaOGkaGHVpvktXnzeAEBkxuGoLNLhFI4rGXbf6r1lLMxriQ8/s0/vs_prj_settings.png" /></a></div><p>Now, these two settings together make ReSharper behave this way and ignore its own "warning" level setting. Obviously, with the Visual Studio upgrade the project settings and the compiler behavior did not change but the ReSharper behavior did. So, you make your conclusions yourself.</p><p>Now, what are the options?</p><p>One, if we <u>uncheck the ReSharper option</u> "Missing XML comment..." 
the warning/error will be entirely gone.</p><p>The same will happen if we <u>uncheck the "XML documentation file"</u> option in the project properties, and this is another option.</p><p>If, however, we want to keep being informed about potential mishandling of the XmlDoc comments, one more option is to just <u>change the "Treat warnings as errors" option</u> in the project properties to either "None" or "Specific" and the ReSharper's error will turn into an expected "warning".</p><p>Finally, we've learned how to control the behavior of multiple components to achieve a preferred outcome.</p><p></p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-59047226544649599362021-02-03T18:48:00.000-05:002021-02-03T18:48:11.393-05:00.NET Web API: Confusing 404 Response to OPTIONS Request for CORS pre-flight<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG9VmjwIQyECjPyDhlt7kf9QWpT3c0sYX2MHYcTk0ZLtA0v1tWso1mMR0I93ZdiXCNx7tetGpirrlyRzlJ6lvzXlHWP8J3_1sOAgdkQQo9PlfwZKGXQjoLxvd3nUKlFL77Khu7Z_pJI0E/s150/quick-tip.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="150" data-original-width="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG9VmjwIQyECjPyDhlt7kf9QWpT3c0sYX2MHYcTk0ZLtA0v1tWso1mMR0I93ZdiXCNx7tetGpirrlyRzlJ6lvzXlHWP8J3_1sOAgdkQQo9PlfwZKGXQjoLxvd3nUKlFL77Khu7Z_pJI0E/s16000/quick-tip.png" /></a><p>Problem: when a Angular applications sends an OPTIONS request to its own backend to satisfy CORS preflight policy it unexpectedly receives a 404 response from a .NET Core Web API application that effectively blocks an actual data data request from executing correctly. If you have ever experienced this problem, continue reading to understand what the problem is and how to fix it. <span></span></p><a name='more'></a><p></p><span></span><p></p><p>First, let's understand why this situation even occurs.</p><p>Modern web application extensively utilize frontend/backend communication and often need to send requests to external (residing on a different domain) endpoints. Such type of communication is known as Cross Origin Resource Sharing (CORS) and is handled by a namesake protocol that is an extension of a standard HTTP protocol. To help facilitate this scenario, modern browsers and web frameworks, like Angular, provide with built-in CORS support to make developers' lives easier without compromising application security.</p><p>In a CORS scenario any frontend to backend data request is accompanied with a preliminary OPTIONS request which is also known as a pre-flight request. Remember, that scenario only occurs when a client app and a backend app are deployed on different domains (origins). If it's not the case an explanation below does not apply.</p><p>Now, when an OPTION request is sent (silently, developers don't need to write any code for that), the client side app expects a successful response back that includes related CORS information and can decide whether to continue with an actual data request. In order for this communication to work correctly, the backend application has to be configured to support CORS OPTIONS requests as well.</p><p>When the backend is implemented using .NET Web API framework CORS support can be added easily and should work correctly and automatically:</p>
<pre class="lang-clike"><code>public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddPolicy(CorsPolicy, builder => builder
.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader()
);
});
...
}
</code></pre>
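<p>One detail worth double-checking: registering the policy in <code>ConfigureServices</code> is only half of the setup, as the policy also has to be applied in the middleware pipeline. A minimal sketch of the <code>Configure</code> side, assuming the same <code>CorsPolicy</code> constant as above:</p>
<pre class="lang-clike"><code>public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();
    // UseCors must come after UseRouting and before the endpoint mapping
    app.UseCors(CorsPolicy);
    app.UseAuthorization();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}
</code></pre>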
<p>It is important to note that the OPTIONS pre-flight request is expected to receive a successful response; otherwise the actual data request will not be sent.</p><p>Now that we understand what happens on the wire, <b>here is where the puzzle is</b>: <i>the OPTIONS request is sent by the client app but the backend responds with 404, which indicates an incorrect request URL</i>.</p><p>But that makes no sense: the URL is exactly the same as for the follow-up data request and it does exist in the code. <b>What is happening?</b></p><p>After fiddling for a while with various developer tools it became apparent that the <u>returned code 404 is wrong</u>: instead, <u>400 (Bad Request)</u> should actually be returned, because in this specific scenario the query string of the request URL was too long and exceeded the allowed maximum length.</p><p>The challenge of diagnosing this problem was that the client application was not helpful, as it would always send an OPTIONS request first, which would fail with 404, so the Web API's actual response (400) was never returned. So instead, a <u>non-browser HTTP client</u> was used to send the request without the OPTIONS pre-flight, and it returned 400 with the correct explanation.</p><p>After increasing the maximum length of the query string in the backend's web.config the problem was solved:</p>
<pre class="lang-xml" style="text-align: left;"><code><requestLimits maxQueryString="2048" maxUrl="4096" />
</code></pre>
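<p>For reference, that element typically lives under the request filtering section of web.config; an assumed placement (the values are just examples) might look like this:</p>
<pre class="lang-xml" style="text-align: left;"><code><!-- assumed location of the requestLimits element within web.config -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxQueryString="2048" maxUrl="4096" />
    </requestFiltering>
  </security>
</system.webServer>
</code></pre>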
<h3 style="text-align: left;">Conclusion</h3><p>Errors always happen in software development and need to be diagnosed and fixed. When the base framework that developers rely on give incorrect diagnostics it makes it more difficult to analyze and fix the problem. It is important to understand the expected behavior and use the right tools to diagnose the problem.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-34063606555939024472020-12-26T21:35:00.003-05:002021-02-04T11:29:28.916-05:00C#/.NET: Validate Anonymous Objects During Unit Testing<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi8ytj9sW7FhL_uPUFke2OVe5DbDg4r_EHkOO_3enp8FkbCpwZZSvWIE2Hqkknh8wuBsTYe0UIxpl527CZIiYcDIW5h4-WYjoKDEbCXMLU1_h-7LcT6Zw4_UPSSLzGEV7wz7imJS_TAag/s300/c-sharp-logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="267" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi8ytj9sW7FhL_uPUFke2OVe5DbDg4r_EHkOO_3enp8FkbCpwZZSvWIE2Hqkknh8wuBsTYe0UIxpl527CZIiYcDIW5h4-WYjoKDEbCXMLU1_h-7LcT6Zw4_UPSSLzGEV7wz7imJS_TAag/w178-h200/c-sharp-logo.png" width="178" /></a></div><p>Say, we have developed a REST API based on .NET Web API framework and need to write a unit test for an end point that returns a collection of objects. This sounds like a usual trivial task that many developers do all the time. Unless, the API method that we need to test, returns a collection of anonymous objects, that is the API method is written using a C# anonymous or dynamic type to construct objects that are included in the list.</p><p>It's totally fine when the endpoint is consumed over HTTP as the result is first serialized into JSON for transfer and then de-serialized back into a JavaScript object on a consumer side. If, however, the endpoint is called directly in C# without JSON conversion, like in a unit test, it's an entirely different story. Let's discuss how we can overcome this issue.</p><span></span><a name='more'></a><p>The API method method code could look like this:</p>
<pre class="lang-clike"><code>var rows = dbContext.Rows.Select(row =>
new { Name = row.NameColumn, Type = row.TypeColumn });
return Ok(rows.ToList());</code></pre>
<p>When we call a method over HTTP we receive a JSON encoded array of objects with two properties in response. If we consume that response in JavaScript it is easily converted into regular JavaScript objects. If, however, we call the method in C# directly without JSON conversion like this:</p><pre class="lang-clike"><code>// act
var result = _sut.TestMethod();</code></pre>
<p>the picture we'll see is quite different. Firstly, the result is an object of type <code>OkObjectResult</code>, which we can, of course, validate against a <code>null</code> value, but we are really interested in the value it wraps:</p><pre class="lang-clike"><code>// assert
Assert.NotNull(result);
Assert.IsType<OkObjectResult>(result);
var value = ((OkObjectResult)result).Value;
Assert.NotNull(value);</code></pre>
<p>If we set a breakpoint and investigate the <code>value</code> we'll see that it's an enumerable (in our case a generic <code>List</code>) of a compiler-generated anonymous type that we cannot easily validate in the code. Attempting to cast the value to something that can be validated will most likely produce a runtime error. We could try to use reflection, but why don't we instead simulate the normal calling behavior and use JSON with the help of Newtonsoft's JSON library?</p><p>We can convert our value into a <code>JObject</code>, or into a <code>JArray</code> if we expect an enumerable, and then examine and validate the objects' properties like this:</p>
<pre class="lang-clike"><code>var rows = JArray.FromObject(value);
foreach (var row in rows)
{
Assert.NotNull(row["Name"]);
Assert.NotNull(row["Type"]);
}
</code></pre>
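<p>If you prefer strongly typed assertions over indexing into the JSON tokens, the same round-trip can also materialize the anonymous objects into a small test-only class. A minimal sketch (the <code>RowDto</code> class below is hypothetical and only needs to mirror the anonymous type's property names):</p>
<pre class="lang-clike"><code>// hypothetical test-only DTO mirroring the anonymous type's shape
class RowDto
{
    public string Name { get; set; }
    public string Type { get; set; }
}

// ToObject<T>() is provided by Newtonsoft.Json and converts the JArray
// into a typed list that standard assertions can work with
var typedRows = rows.ToObject<List<RowDto>>();
Assert.All(typedRows, r => Assert.False(string.IsNullOrEmpty(r.Name)));</code></pre>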
<p>This kind of technique is easy to use and allows implementing all necessary types of assertion in the unit tests.</p><h3 style="text-align: left;">Best practice</h3><p>Using the technique described above it is possible to implement unit testing that works against anonymous objects but I would not call this the best practice. If it was possible I would definitely opt for redesigning the original method so it returns a strongly typed output that is 100% testable without any special efforts.</p>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-58811164117615689722020-12-18T16:28:00.002-05:002021-01-03T12:05:31.951-05:00Angular: Mock NgRedux for Unit Testing<p style="clear: both; text-align: left;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGbr9KrQ6H1GyzmgF7EYdOGUs1jbxIjeeM4ctVRpupyhZijSWVwdWBB2RaQFdvPxw8FF3nIE7OJrrYO4HzmGKDzlijlOSrLoExzIR1IDTYHiEXmpGFanw4_j0OnNPuLNCpdxrb-ZfvAAc/s220/angular.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="220" data-original-width="220" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGbr9KrQ6H1GyzmgF7EYdOGUs1jbxIjeeM4ctVRpupyhZijSWVwdWBB2RaQFdvPxw8FF3nIE7OJrrYO4HzmGKDzlijlOSrLoExzIR1IDTYHiEXmpGFanw4_j0OnNPuLNCpdxrb-ZfvAAc/w200-h200/angular.png" width="200" /></a></p><p>If you use <code>Redux</code> in your Angular web application (likely it is an <code>NgRedux</code> from Angular Redux) and develop unit tests for components that depend on the <code>Redux</code> store sometimes it's easier to mock the <code>Redux</code> store rather than deal with incorrect values that it may supply in a test scenario. If this sounds like your situation, read on.<span></span></p><a name='more'></a><p></p><p>If one searches the Internet for how to mock <code>NgRedux</code> they will find a number of suggestions and examples including the testing code in the NgRedux Git repository itself. Most of the suggestions I've found seem too elaborate and unnecessary to my taste considering that I don't want to spend any efforts on creating a mock NgRedux store class nor testing NgRedux related functionality. If you are in a similar mindset below is how I mocked the <code>NgRedux</code> an easy way.</p><h3 style="text-align: left;">The Problem</h3><p>There are components in an Angular application that depend on an application state that is implemented as an <code>NgRedux</code> store as <code>NgRedux<IAppState>></code>. The components receive a reference to the <code>NgRedux</code> store via Angular dependency injection in a constructor similar to:</p><pre><code class="lang-js">constructor(ngRedux: NgRedux<IAppState>, ...)</code></pre><p>When writing unit tests for the components with NgRedux dependency using Jasmin framework it is likely that the application state will not be initialized correctly in a unit testing environment and the component logic may fail because of an unexpected application state.</p><h3 style="text-align: left;">The Solution</h3><p>The easiest solution that I've found and works for me is mocking an NgRedux using Jasmin Spy when initializing a TestBed for a unit test. To do so two questions need to be answered: what dependency type will be provided and what methods need to be mocked. The answers are below:</p><pre><code class="lang-js">providers: [
{
provide: NgRedux,
useValue: jasmine.createSpyObj<NgRedux<IAppState>>(
['dispatch', 'configureStore', 'select'])
},
...
]</code></pre><p>The type and the value are self-explanatory based on what dependency is being injected into a component's constructor and the methods listed will most likely be used. Of course, based on a specific testing scenario, the <code>select</code> method can be mocked to provide a concrete value as well.</p><h3 style="text-align: left;">Alternative</h3><p>If more elaborate testing scenarios are required such as testing the select values or actions of the <code>NgRedux</code> store there is a <code>MockNgRedux</code> class that is a part of the NgRedux package and included in the <code>NgReduxTestingModule</code> module. Read more about using the <code>MockNgRedux</code> here https://github.com/angular-redux/store/blob/master/articles/intro-tutorial.md#unit-testing-selections.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-90355997304392845582020-11-27T19:20:00.005-05:002020-12-18T10:47:51.346-05:00.NET Core OData: How to Construct a DateTime Filter<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh145K4XsWeDvJTUSTGoiOvT5q7NM-ujoMZmTdyv-oF4KIL9jiU4SWh84231Fo9vU7nWxTib2fh0byc1vmophYiHTpVRu-UL2zxhuCn1JfyRiHsEVf9jc8pQnOEgLVJr-red0XcASxNuzo/s200/odata.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="200" data-original-width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh145K4XsWeDvJTUSTGoiOvT5q7NM-ujoMZmTdyv-oF4KIL9jiU4SWh84231Fo9vU7nWxTib2fh0byc1vmophYiHTpVRu-UL2zxhuCn1JfyRiHsEVf9jc8pQnOEgLVJr-red0XcASxNuzo/s0/odata.png" /></a></div>Say, we need to construct a query to an OData endpoint that filters records by DateTime values. Over the years as OData standard had been evolving there have been various different ways to work with DateTime columns depending on OData version and an implementation of an actual OData service that could have been based on various different frameworks. This being said, it's likely, that a single universal way of constructing a DateTime based filter clause for OData services might not exist. Take this under consideration, and note that the solution presented below had only been tested with OData services that are built with .NET Web API Odata v4.<span><a name='more'></a></span><p></p><p>As a reminder, as for any other data type, to filter records by specific column values we'll be using an OData $filter clause.</p><p style="text-align: left;">Technically speaking there is not much difference from the syntax point of view between DateTime filtering and other data types. The comparison operations are the same: lt, lg, eq, le, and gt. The only major difference is how to supply a DateTime value to the $filter clause.</p><p>Because OData service implementation may strongly depend on an underlying framework a particular type used to represent a date and time value may vary. Because of that, in a $filter clause the DateTime value should always be presented in a standardized form known as ISO format that in general looks like this: <code>YYYY-MM-DDThh:mm:ss.sTZD.</code></p><p style="text-align: left;">A DateTime value in the $filter clause should not be enclosed in ' or " otherwise it most likely will be recognized as a string. In addition, some implementations may require to cast a DateTime value in the $filter clause to a specific EDM type in order to work correctly. 
This last detail should be confirmed with the specific OData framework's documentation.</p><p>Now, enough theory, let's look at some examples so it's all clear.</p><pre class="lang-html"><code>// the following examples may be equivalent
$filter=Birthday lt cast(2012-08-01T00:00:00.000Z,Edm.DateTimeOffset)
$filter=meetDate gt 2020-10-01T05:00:00.000Z
// filter records that have dates
$filter=meetDate ne null
</code></pre><p>Some OData implementations may support additional functions that can be used in the $filter clause as well:</p>
<pre class="lang-html"><code>$filter=year(VacationDate) gt 2005
$filter=month(VacationDate) ne 11
$filter=day(VacationDate) lt 8
$filter=hour(CheckInTime) eq 14
$filter=minute(CheckInTime) ge 30
$filter=second(CheckInTime) lt 59
$filter=date(Birthday) gt 1999-12-31
$filter=time(Birthday) lt 03:00:00.000
</code></pre>
<p>As a general rule, it's not recommended to use equality comparison when operating with DateTime values as a fraction of a second difference will render a comparison false. Try to use <code>lt, gt</code>, and <code>ne</code> operators to implement your logic unless you are dealing with precise numbers.</p><p>As a last quick tip, in JavaScript it is easy to obtain a DateTime value in ISO format using a built-in method:</p>
<pre class="lang-js"><code>const filter = '$filter=lastUpdated lt ' + filterDate.toISOString();
</code></pre>
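<p>Building on that, and following the earlier advice to avoid exact equality, a whole-day filter can be expressed as a range (the <code>lastUpdated</code> property name is just illustrative):</p>
<pre class="lang-js"><code>// illustrative only: records updated on 2020-11-01 (UTC), using ge/lt instead of eq
const start = new Date(Date.UTC(2020, 10, 1)); // 2020-11-01T00:00:00.000Z
const end = new Date(Date.UTC(2020, 10, 2));   // 2020-11-02T00:00:00.000Z
const filter = '$filter=lastUpdated ge ' + start.toISOString() +
  ' and lastUpdated lt ' + end.toISOString();
</code></pre>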
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-2255319077011299542020-11-20T12:42:00.046-05:002020-11-20T12:50:14.639-05:00Reuse a MemoryStream without Writing to a FileStream<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi8ytj9sW7FhL_uPUFke2OVe5DbDg4r_EHkOO_3enp8FkbCpwZZSvWIE2Hqkknh8wuBsTYe0UIxpl527CZIiYcDIW5h4-WYjoKDEbCXMLU1_h-7LcT6Zw4_UPSSLzGEV7wz7imJS_TAag/s300/c-sharp-logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="267" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgi8ytj9sW7FhL_uPUFke2OVe5DbDg4r_EHkOO_3enp8FkbCpwZZSvWIE2Hqkknh8wuBsTYe0UIxpl527CZIiYcDIW5h4-WYjoKDEbCXMLU1_h-7LcT6Zw4_UPSSLzGEV7wz7imJS_TAag/w178-h200/c-sharp-logo.png" width="178" /></a></div><p><code>MemoryStream</code> is a very useful class as it allows working with a Stream-like data in memory without having dependencies on any external resources like files, etc. Even though the <code>MemoryStream</code> implements an <code>IDisposable</code> interface it does not actually have any critical resources to dispose of, so, explicitly disposing of a <code>MemoryStream</code> object is not strictly necessary as the .NET memory management will take care of it. This specifics presents an opportunity of reusing the <code>MemoryStream</code> object if needed across multiple code scopes.</p><span><a name='more'></a></span><p>To do that safely and efficiently remember these easy tips:</p><p></p><ol style="text-align: left;"><li>Do not create a <code>MemoryStream</code> object inside a limited size inner scope like a small method, <code>try/catch</code>, <code>if/else</code> or <code>using</code> block as it will be disposed of automatically as soon as the code exists the scope. Instead, if you intend to reuse the object, create it in the most outer scope and pass it around as a reference.</li><li>Pass the reusable <code>MemoryStream</code> object as a method argument or a global variable. No need to worry about explicitly disposing of the <code>MemoryStream</code> as it will be disposed of automatically by the .NET memory management when no longer referenced.</li><li>As any other <code>Stream</code>, the <code>MemoryStream</code> works with data sequentially. So, when receiving a reference to a <code>MemoryStream</code> object in any code scope remember to position the stream to the beginning as the very first step. Otherwise you risk to be disappointed as your code won't find expected data at the current stream position. You can safely use something like</li></ol>
<pre><code class="lang-clike">myMemoryStream.Position = 0;</code></pre><p>For more information, read the official documentation here https://docs.microsoft.com/en-us/dotnet/api/system.io.memorystream.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-85600584589436466452020-11-06T15:20:00.002-05:002020-12-18T10:49:29.183-05:00Best Practices of Working with Date in JavaScript/TypeScript<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZYTH097hQxtMAaGvlabDWwNr2UVK1H6z2zViOLpLBbxXpiJUi7PulnYZ6ymRjsRF2IAvkWRz8ENGJGTE50W1BMT-Nl94iI2uA2mbysd777FANX4puvxHNcUmX8yBcTpeGGkUtR-kusj8/s200/js-icon.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="200" data-original-width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZYTH097hQxtMAaGvlabDWwNr2UVK1H6z2zViOLpLBbxXpiJUi7PulnYZ6ymRjsRF2IAvkWRz8ENGJGTE50W1BMT-Nl94iI2uA2mbysd777FANX4puvxHNcUmX8yBcTpeGGkUtR-kusj8/s16000/js-icon.png" /></a></div><p>Date is a built-in type in JavaScript/TypeScript (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) and provides most of the basic needs for manipulating with Date and Time values in the code. Being one of the most well-defined and often used basic types of the language it still generates a lot of confusion, questions and erroneously working code among a wide developers' community. Let's try and figure out some of the best practices that should be applied to the JavaScript/TypeScript code when dealing with the Date type.<span></span></p><a name='more'></a><p></p><p>The very first thing every JavaScript/TypeScript developer must remember all the time is the <u>Date type is a Reference type</u>. It may be especially important for developers coming from the server-side strongly type languages like C# in which the DateTime (perhaps the closest analogue) is a Value type. Not in JavaScript! The implication of that is that any JavaScript object of Date type is passed around as a reference not a value, and any variable of Date type holds a reference to a Date object, not its value. Not remembering that will ultimately create a lot of buggy code and headaches and wasted debugging time. Here is an example.</p>
<pre><code class="lang-js">let d1 = Date.now();
let d2 = new Date('03/01/2020');
// expect a current date and 03/01/2020
console.log(d1.toLocaleDateString(), d2.toLocaleDateString());
d2 = d1;
d2.setFullYear(2020,09,30);
// expect both dates to be 09/30/2020
console.log(d1.toLocaleDateString(), d2.toLocaleDateString());</code></pre>
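<p>A minimal sketch of how to avoid that surprise: instead of assigning the reference, copy the underlying timestamp into a brand new Date object so the two variables stay independent.</p>
<pre><code class="lang-js">// copy the value, not the reference
let d3 = new Date(d1.getTime());
d3.setFullYear(1999, 0, 1);
// d1 keeps its own value, only d3 changed
console.log(d1.toLocaleDateString(), d3.toLocaleDateString());</code></pre>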
<p>Now that we understand this baseline, let's take a look at a few scenarios in which a Date needs to be passed around or shared across code boundaries and how it can be done in a safe and reliable way.</p><h4>Dates swap</h4><p>Say we implement logic where we need to keep the current date and a previous value. When we switch to the next current date we need to swap the values between the current and the previous dates. This is how it's done safely.</p>
<pre><code class="lang-js">swap(newDate: Date) {
...
// the getTime/setTime methods use a number value and
// do not expose/modify object references
d0.setTime(d1.getTime());
d1.setTime(newDate.getTime());
...
}</code></pre>
<h4>Receiving a Date value</h4><p>In this scenario our code receives a date value from the outside, for example from a user interface control. We don't want to introduce a reference leak from inside our code and need to accept the value safely.</p>
<pre><code class="lang-js">onDateChange(date: string | Date) {
// incoming Date value could be either a string or a Date
// either scenario is protected from a reference leakage
// if myDate was not initialized before it's straight-forward
this.myDate = new Date(date);
// if we wanted to keep an instance
const newDate = new Date(date);
this.myDate.setTime(newDate.getTime());
}</code></pre>
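<p>A quick usage sketch of the handler above (the <code>component</code> object is an assumption for illustration): a later mutation of the caller's Date object does not leak into our state.</p>
<pre><code class="lang-js">const picked = new Date('11/06/2020');
component.onDateChange(picked);
picked.setFullYear(1999);
// myDate still shows 11/06/2020 because we copied the value, not the reference
console.log(component.myDate.toLocaleDateString());</code></pre>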
<h4>Sharing a Date value</h4><p>Our code produces some datetime stamp that is shared with external code, say, via an event. We need to make sure that we don't compromise the internal state of our code when the external code deals with the shared value.</p>
<pre><code class="lang-js">nextDayEvent() {
// move one day ahead
this.eventStamp.setDate(this.eventStamp.getDate() + 1);
// wrap into another Date object to isolate the reference
this.dateEmitter.emit(new Date(this.eventStamp));
// or share a string
this.stringEmitter.emit(this.eventStamp.toUTCString());
}</code></pre>
<h4>Conclusion</h4><p>Hopefully the examples above illustrate the importance of following the best practices of using the Date type in JavaScript/TypeScript and will save you unnecessary headaches when trying to figure out why the code does not work as expected.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-38977318870691408192020-11-05T13:00:00.005-05:002020-11-05T19:06:33.541-05:00A Color Picker in Chrome DevTools<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqxik0e75qCZEfxnKfxvuCqZ0lPVpqtN0TkuTRa8xgFoql4StidD8XRMIJWsNNr7v7ghRvzKxzqyUIpmM3DQ2qw9HzCKjv9UWDbrIW6gMSgYEfg08gfCTChPoqBOdSUcujH5MFKZh1Ysw/s425/chrome_color_picker.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="425" data-original-width="306" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqxik0e75qCZEfxnKfxvuCqZ0lPVpqtN0TkuTRa8xgFoql4StidD8XRMIJWsNNr7v7ghRvzKxzqyUIpmM3DQ2qw9HzCKjv9UWDbrIW6gMSgYEfg08gfCTChPoqBOdSUcujH5MFKZh1Ysw/s320/chrome_color_picker.png" /></a></div><p style="text-align: left;">When you are working with CSS and specifically dealing with colors in HEX format it's very handy to be able to quickly find (or pick) a color value anywhere on your screen. There are tools for that, of course, namely color picker browser extensions, which I used to have. Not anymore. Because Chrome, which is my main browser, has a Color Picker built into the DevTools. How nice, eh?</p><p></p><p style="text-align: left;">Open the DevTools by pressing F12 (or use a menu) and in the Styles panel you can simply click on any color square and the Color Picker will open.</p><p style="text-align: left;">You can use the pipette tool on the left; while it's in the "ON" state and colored blue, move it around the screen and click on any color, and that color will appear in the Color Picker with a HEX value.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-2474900035396683692020-11-02T23:29:00.152-05:002020-11-03T12:38:19.205-05:00New Code Syntax Highlighter Prism<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyAfXVwz_EkVyv7CmDf1_QLIwqJ7ocEfOvni0eirBIiAptvTtFUNI0Pj3-JvqRGLFI6xbODnE8kdtwDpJGVaxnv-Q58QEqqf-3j6ZvBjhkRwD7PBmTnvrFCxk169XZAjV0j5smrKuBxeM/s974/prism+logo.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="270" data-original-width="974" height="178" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyAfXVwz_EkVyv7CmDf1_QLIwqJ7ocEfOvni0eirBIiAptvTtFUNI0Pj3-JvqRGLFI6xbODnE8kdtwDpJGVaxnv-Q58QEqqf-3j6ZvBjhkRwD7PBmTnvrFCxk169XZAjV0j5smrKuBxeM/w640-h178/prism+logo.png" width="640" /></a></div><p>Everyone who posts code samples in their blogs needs a code highlighter. Why? Because without one the code samples would not look nice and readable and your readers would become unhappy and stop reading your blog. Don't like the sound of it? Use a code highlighter.<span></span></p><a name='more'></a><p></p><p>For over 10 years I had used the SyntaxHighlighter from Alex Gorbatchev. It was very simple to use and looked nice and readable. Recently, however, I've noticed that my blog takes over 5 seconds to load a page. 
When I looked into the browser debugger I immediately noticed a bunch of link loading errors on the page, all of them pointing to SyntaxHighlighter files hosted outside of the blog on the author's web site. Well, not any more.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAcxONxcYf2_W4lcGCkdAh9BHD4MghLyx1kRhVLk7iBxPVR5DOBC9FAB0Q_tGB31IrXtWlnT7eOBs-MJUE6rWEtN3H0mDX-0YUHqy1ZNnZo_ajdKdKhR01ix6rhdrNPGE7c59iHsfh3HM/s479/sh_error.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="237" data-original-width="479" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAcxONxcYf2_W4lcGCkdAh9BHD4MghLyx1kRhVLk7iBxPVR5DOBC9FAB0Q_tGB31IrXtWlnT7eOBs-MJUE6rWEtN3H0mDX-0YUHqy1ZNnZo_ajdKdKhR01ix6rhdrNPGE7c59iHsfh3HM/s16000/sh_error.png" /></a></div><p>Too bad. As expected, most of the code samples have become invisible on the pages. As soon as I removed the links to the SyntaxHighlighter from the blog pages, though, it started loading fast again. So, the slowness was solved. Now, the question is: what do I use instead as a syntax highlighter?</p><p>After a short Internet search I settled on PrismJS from prismjs.com. There are other active similar projects, of course, but Prism seems to be very stable, well supported, widely used and simple. In addition, and very importantly, it's supported by multiple public free CDNs, which should give me some peace of mind against sudden unavailability of the links, as happened with SyntaxHighlighter after over 10 years of use.</p><p>So, how do I use Prism? First, add the links to the Prism files, specifically a JavaScript file and a CSS file, to your page's HTML head section. Only two links are needed and they should point to the CDN of your choice. I've chosen the CdnJs and here are the links:</p><pre><code class="lang-html"><link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/themes/prism-tomorrow.min.css" rel="stylesheet"/>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/prism.min.js" type="text/javascript"></script></code></pre><p>By adding the two lines of HTML to your page you've just installed the PrismJS on your web site. Now, to use it you need to surround your code sample with pre and code tags and use a CSS class on the code tag with the name "lang-xxxx" where xxxx is a language code. The most often used language codes are html, xml, js, clike, but many more are supported. See the list of supported languages here: https://prismjs.com/index.html#supported-languages. If you have html or xml tags in your code sample they have to be HTML encoded, i.e. replace "<" with "&lt;" and ">" with "&gt;".</p><pre><code class="lang-html"><pre><code class="lang-html">
Your code example
</code></pre></code></pre><p>Why am I telling you all that? First, in case you might have been affected by the sudden disappearance of the SyntaxHighlighter as well, and second, well... The formatting of Prism is not fully compatible with that of the SyntaxHighlighter, as the HTML standards have changed significantly since then. So, a lot of manual reformatting in older articles will be required. I am doing that but it's a lot of work and it will take some time to complete. So, hang in there if you've decided to read one of the older posts that haven't been converted yet.</p>AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com1tag:blogger.com,1999:blog-602850767119300869.post-83846512181030428952017-05-01T10:47:00.005-04:002020-11-05T20:08:19.266-05:00Deploying a Web App from a Command Line using MSBuild and WebDeploy<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvgdOuTNETCb02IeA2U28u_kWbHKU8dlf2ogUv2301UL5HO-Fdgmu0tESxx8sJJN8nHjNv8yENAv8mTdxcmgOO6RwDzUgwpOt_Vyoybr4pB1Qs2BHuyRufEyG4mHiN22nAWXgt4WF5VdI/s1600/ci-01.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: justify;"><img border="0" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvgdOuTNETCb02IeA2U28u_kWbHKU8dlf2ogUv2301UL5HO-Fdgmu0tESxx8sJJN8nHjNv8yENAv8mTdxcmgOO6RwDzUgwpOt_Vyoybr4pB1Qs2BHuyRufEyG4mHiN22nAWXgt4WF5VdI/w200-h183/ci-01.png" width="200" /></a></div>
<br />
<div style="text-align: left;">
<span face=""verdana" , sans-serif">Modern CI/CD process especially in an agile/scrum environment often includes a deployment step so the automated and manual testing can be done. Depending on specifics of the development process it can be implemented in variety of ways but for a .NET Web Application the most convenient way is to call the MSBuild to publish a package to a target machine running IIS and instrumented with the Web Deploy.<span><a name='more'></a></span></span></div>
Most .NET developers are familiar with the publishing capability of Visual Studio. Exactly the same result can be achieved from a command line using MSBuild, which can easily be implemented in most of the modern build systems like VSTS, TFS, TeamCity, etc.<br />
<br />
This article assumes familiarity with the Visual Studio, Web Deploy and all the necessary configuration procedures to make the deployment to a remote machine work.<br />
<br />
Let's start from refreshing how the Visual Studio publishing feature works so we understand the mechanics behind the scene and the main concept of using MSBuild for publishing.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoJan_mvoyCNPdy2pEu5Kt9yiw854OtZUTiQ8rO6bs9gd7xGUo8PwtaGiwXKxVcmKztl_hg4YSBGMDrhCT7w6D3frxlSYrLxDba4vU-McmIPRATYv9Q4p1-BOpbpiFvSJkpj5T9KzHarw/s1600/vs_publish.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="498" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoJan_mvoyCNPdy2pEu5Kt9yiw854OtZUTiQ8rO6bs9gd7xGUo8PwtaGiwXKxVcmKztl_hg4YSBGMDrhCT7w6D3frxlSYrLxDba4vU-McmIPRATYv9Q4p1-BOpbpiFvSJkpj5T9KzHarw/w667-h498/vs_publish.jpg" width="667" /></a><br />
A web project can be easily published from Visual Studio using a publishing wizard that collects a number of parameters and then executes the publishing command.<br />
<br />
First, fill in all the parameters that represent our deployment environment, namely, Server: a target machine DNS name or IP, Site name: an IIS web site name, User name, Password: an IIS user's name and password that has access to the above web site. Then, click Publish and Visual Studio will try to connect to the target machine using the Web Deploy protocol, authenticate with the given user name and password, and command the Web Deploy agent to deploy the package to the given IIS web site.<br />
<br />
If the publishing was successful (because all values are correct and the configuration is right) we would save the set of parameters as a publish profile XML file that we could easily reuse in the future. Let's examine the content of the publish profile file. It could be pretty elaborate but we are only interested in the most important part of the file, the part that describes the deployment parameters we entered in the publishing wizard.<br />
<pre><code class="language-xml"><?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
    <MSDeployServiceURL>server.contoso.com</MSDeployServiceURL>
    <DeployIisAppPath>www.contoso.com</DeployIisAppPath>
    <UserName>user_name</UserName>
    <LaunchSiteAfterPublish>False</LaunchSiteAfterPublish>
    <ExcludeApp_Data>False</ExcludeApp_Data>
    <SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
    <EnableMSDeployBackup>False</EnableMSDeployBackup>
  </PropertyGroup>
</Project>
</code></pre>
<br />
The first two lines in the PropertyGroup define the publishing method: MSDeploy and WMSVC. If you are familiar with Web Deploy, you understand that MSDeploy is the command line utility and WMSVC is the Web Management Service that runs on the target machine. That basically defines the communication protocol. Further, there are deployment target attributes and some parameters that define the Web Deploy's behavior. The only thing we don't see is the user's password, but if it were there it would be wrapped in a "Password" tag.<br />
<br />
The important conclusion from this observation is this: we know the names of the parameters for all our values and we can try and map them to the MSBuild command line.<br />
<br />
The MSBuild command line syntax allows passing an arbitrary number of named parameters like this:<br />
<pre><code class="language-shell">msbuild /p:name=value</code></pre>
so we could compose a command line that looks like this:<br />
<pre><code class="language-shell">msbuild /p:WebPublishMethod=MSDeploy /p:MSDeployPublishMethod=WMSVC
/p:MSDeployServiceUrl=server.contoso.com /p:UserName=user_name /p:Password=password
/p:DeployIISAppPath=www.contoso.com /p:SkipExtraFilesOnServer=True ...</code></pre>
Basically, we just map the XML file tags to the names and assign the corresponding values. This is going to work but we also need to add some important additional parameters for the MSBuild itself, so the final version will look like this:<br />
<pre><code class="language-shell">msbuild /p:DeployOnBuild=True /p:AllowUntrustedCertificate=True
/p:CreatePackageOnPublish=True /p:WebPublishMethod=MSDeploy /p:MSDeployPublishMethod=WMSVC
/p:MSDeployServiceUrl=server.contoso.com /p:UserName=user_name /p:Password=password
/p:DeployIISAppPath=www.contoso.com /p:SkipExtraFilesOnServer=True</code></pre>
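As a side note, if you saved a publish profile from the wizard you can also point MSBuild at it by name instead of repeating every property; a hedged sketch, assuming a hypothetical MyWebApp.csproj project with a saved MyProfile.pubxml under Properties\PublishProfiles (the password is still supplied on the command line because it is not stored in the profile):<br />
<pre><code class="language-shell">msbuild MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=True
/p:PublishProfile=MyProfile /p:Password=password /p:AllowUntrustedCertificate=True</code></pre>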
Finally, let's mention that when using this command line with a build system like VSTS or TeamCity it's recommended to replace values like URLs, names and passwords with the parameters that are defined in a separate scope. The publish command line adapted for VSTS or TFS would look like this:<br />
<pre><code class="language-shell">msbuild /p:DeployOnBuild=True /p:AllowUntrustedCertificate=True
/p:CreatePackageOnPublish=True /p:WebPublishMethod=MSDeploy /p:MSDeployPublishMethod=WMSVC
/p:MSDeployServiceUrl=$(IisHost) /p:UserName=$(IisUser) /p:Password=$(IisPassword)
/p:DeployIISAppPath=$(IisSite) /p:SkipExtraFilesOnServer=True</code></pre>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-43050691058565800782017-04-20T20:33:00.003-04:002020-11-05T20:09:40.509-05:00Finalized Agenda for the Global Azure Bootcamp 2017 Event<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs-d30qLTVA03hl0dkjKAIyAoZst7XLozz4gQRpRPTvGyk36Ar2x5DnyifrRdDDs3d7Rb_d4TJcs2PpCkEoGWuzQqZk5FN-RfZOU37-bk_UpokvrgE86apGCb-twr031QuN2lCsG5MqKc/s1600/azure_bc_17.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs-d30qLTVA03hl0dkjKAIyAoZst7XLozz4gQRpRPTvGyk36Ar2x5DnyifrRdDDs3d7Rb_d4TJcs2PpCkEoGWuzQqZk5FN-RfZOU37-bk_UpokvrgE86apGCb-twr031QuN2lCsG5MqKc/s16000/azure_bc_17.jpg" /></a></div><div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="text-align: left;">
<span face=""verdana" , sans-serif"><span style="font-weight: normal;">The finalized agenda for the <a href="http://blog.turlov.com/2017/04/global-azure-bootcamp-2017.html" target="_blank">Global Azure Bootcamp 2017 community event</a> is finally available. It is also updated on the <a href="https://www.eventbrite.com/e/azure-bootcamp-2017-mississauga-tickets-33555824374" rel="nofollow" target="_blank">event registration page</a>. The organizers (the <a href="http://www.architectureug.com/" target="_blank">Toronto Azure/Architecture User Group ArcUG</a>) managed to assemble a group of great speakers to cover an exciting list of topics</span>.</span></div><div style="text-align: left;"><span face=""verdana" , sans-serif"><br /></span></div><span face=""verdana" , sans-serif">The morning session will include one of the hottest topics "Big Data and Machine Learning" presented by <a href="https://www.linkedin.com/in/wirelesslife" rel="nofollow" target="_blank">Anthony Bartolo</a>, Microsoft technology evangelist, followed by related hands-on lab.<span><a name='more'></a></span></span><span face=""verdana" , sans-serif">The afternoon sessions will cover three connected topics: configuring IaaS on Azure using Azure Resource Manager and automation, building and deploying a web application on Azure, and instrumenting and monitoring a web application performance and health with Azure Application Insights, presented by local community leaders <a href="https://twitter.com/jlee_consulting" rel="nofollow" target="_blank">Jack Lee</a>, a certified Azure professional and the president of the North Toronto User Group, <a href="https://www.linkedin.com/in/gimhana" rel="nofollow" target="_blank">Gimhana Muthukumarana</a>, an Azure practitioner, MCP, and Toronto Azure Architecture UG lead, and <a href="https://www.linkedin.com/in/aturlov/" rel="nofollow" target="_blank">Alexander Turlov</a>, a software and cloud architect, MCSD, Agile and Azure practitioner. The presentations will be followed by practical labs.</span><br />
<span face=""verdana" , sans-serif"><br /></span>
<span face=""verdana" , sans-serif">The late afternoon session will be dedicated to another very hot topic: developing cross platform mobile solution using Microsoft Xamarin, presented by <a href="https://www.linkedin.com/in/mahdiali/" rel="nofollow" target="_blank">Mahdi Alirezale</a>, a Mobile Solutions Architect with Xamarin,</span><br />
<span face=""verdana" , sans-serif"><br /></span>
<span face=""verdana" , sans-serif">The entire event will be catered with plenty of time for networking with like-minded professionals.</span></div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-67836377292048743732017-04-13T13:52:00.005-04:002020-11-20T13:07:09.417-05:00Global Azure Bootcamp 2017<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="270" data-original-width="400" height="135" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhXlahwfLdb6sM3yMti5_vboc4fxYFBFdhVlhbqjW77mzuY8SPHx6EZGplqx_VVa2HO8ridRLL90L-u_3c1yqnbMVY5wxGbDpVT9xJ3D9IlFSRNK1kw_ntCHnUokMX1FX_aC_WJPWIxhQ/w200-h135/Global+Azure+Bootcamp+2017.png" width="200" /></div><span face="lato, arial, sans-serif" style="background-color: white; color: #666666;">All around the world user groups and communities want to learn about Azure and Cloud Computing! On </span><span face="lato, arial, sans-serif" style="background-color: white; border: 0px; color: #666666; font-stretch: inherit; font-weight: 600; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;">April 22, 2017</span><span face="lato, arial, sans-serif" style="background-color: white; color: #666666;">, all communities will come together once again in the fifth great Global Azure Bootcamp event! Each user group will organize their own one day deep dive class on Azure the way they see fit and how it works for their members. The result is that thousands of people get to learn about Azure and join together online under the social hashtag </span><a href="https://twitter.com/search?q=%23GlobalAzure" style="background-color: white; border: 0px; color: #3b8dbd; font-family: lato, arial, sans-serif; font-stretch: inherit; font-variant-numeric: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease 0s; vertical-align: baseline;">#GlobalAzure</a><span face="lato, arial, sans-serif" style="background-color: white; color: #666666;">! Join hundreds of other organizers to help out and be part of the experience!</span><br />
<a name='more'></a><br />
<div>
If you are in Toronto GTA join the <a href="http://architectureug.com/" target="_blank">Toronto Azure and Architecture User Group</a> Azure <a href="https://www.eventbrite.com/e/azure-bootcamp-2017-mississauga-tickets-33555824374" rel="nofollow" target="_blank">Bootcamp Event</a> for an exciting day of learning and networking with other Azure enthusiasts!</div>
<div>
<br /></div>
<div>
Please <a href="https://www.eventbrite.com/e/azure-bootcamp-2017-mississauga-tickets-33555824374" rel="nofollow" target="_blank">REGISTER</a> earlier as the seats are limited!</div>
<div>
<br /></div>
<div>
The event will be as interesting for Azure novices as for seasoned professionals. The topics covered will include basic concepts like Azure Resource Manager and ARM templates for your IaaS deployments, building an ASP.NET MVC application with Azure DocumentDB from scratch and deploying it on Azure PaaS (Platform as a Service), as well as more advanced concepts like instrumenting your web application with Application Insights and monitoring and analyzing the application's behavior, and the hottest concepts like machine learning, BOT services, and mobile development with Xamarin.</div>
<div>
<br /></div>
<div>
The event will be catered and located at Microsoft Canada office in Mississauga:</div>
<div>
<br /></div>
<div>
1950 Meadowvale Boulevard</div>
<div>
Mississauga, ON L5N 8L9</div>
<div>
<br /></div>
<div>
Please <a href="https://www.eventbrite.com/e/azure-bootcamp-2017-mississauga-tickets-33555824374" rel="nofollow" target="_blank">register now</a> to reserve your seat at the event.</div>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-39769871108394687342017-04-03T12:11:00.005-04:002020-11-05T20:10:35.259-05:00Ingenious USB Flash Drive Repair Utility <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinNuCfDvIHULVu0NkCN74CjiNu5z1EALxXkR3ocUIenwN6csXwdSmSwUETOOb_MgCg_wWmnVpwuRwAOV6rUoKLFKlVzzkZ4RuLJdOyA-UmFAeCAH2f2kucdXpZLwV72BV8StYRPUJgIw4/s1600/hpusbdiskformat.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinNuCfDvIHULVu0NkCN74CjiNu5z1EALxXkR3ocUIenwN6csXwdSmSwUETOOb_MgCg_wWmnVpwuRwAOV6rUoKLFKlVzzkZ4RuLJdOyA-UmFAeCAH2f2kucdXpZLwV72BV8StYRPUJgIw4/w237-h320/hpusbdiskformat.jpg" width="237" /></a></div><div dir="ltr" trbidi="on">
<p><span>Every now and then it happens when you least expect it: Windows fails to mount and read your trusted USB flash drive that is urgently needed and requests to format it. When you reluctantly agree, after struggling to remember whether there was any important information on it (check point: never use USB flash drives for long term storage), you only see that Windows recognized just a tiny portion of the drive as accessible and it's useless because the file you need to copy is way larger. What to do?<span style="font-size: medium;"></span></span></p><a name='more'></a><span>For years I've been using a simple yet powerful utility called HP USB Disk Storage Format Tool that has miraculously restored the full capacity of numerous flash drives of various sizes and brands. The utility is so simple it does not even require installation: once downloaded it runs on double click and does the job. It is branded by HP (assuming Hewlett-Packard?) but you cannot find it on HP web sites.<br /><br /> Every time I needed it and did not have it on a computer I was able to Google it and find whatever current version was available across many file download sites, but every single time I asked myself: where exactly is the origin of this utility? (just because I like to download from the origin instead of another mirror and potentially avoid a malicious impostor). I was never able to find the origin and had to download from whatever location was found.<br /><br /> Just recently, when I needed to do that again for a friend's flash drive, I searched and found that it is now available as a <a href="https://www.blogger.com/#">chocolatey package</a>. For those who do not know what <a href="https://www.blogger.com/#">Chocolatey</a> is, just go there and learn, but in a nutshell it is a command line installation utility for Windows that simplifies software installation tasks and (more importantly) keeps a versioned repository of useful software that is safe to install from.<br /><br /> The Chocolatey user experience is minimalist and tech/nerd oriented but very easy and quick to use and all in all worth keeping around. Once you install Chocolatey on your computer you can easily install the HP USB Disk Storage Format Utility (the safe and latest version) like this:</span><p></p>
<br />
<pre><code class="lang-batch">C:\> choco install hpusbdisk</code></pre><br />
<br /><span>Now, keep in mind that because there is no installer for this utility you need to double click the file itself, and by default Chocolatey places it in the C:\ProgramData\chocolatey\lib folder. To simplify usage you may want to navigate to that folder and create a desktop shortcut, but if you are more comfortable with the command line, the Chocolatey bin folder is now in your PATH and you can just type this in your CMD and enjoy:</span><br /><br />
<pre><code class="lang-batch">C:\> hpusbdisk</code></pre></div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-87666105397723701852017-02-17T13:41:00.001-05:002020-11-03T12:43:15.749-05:00Intel Network Cards are a Disappointment: still no solutions<div dir="ltr" style="text-align: left;" trbidi="on">
<table><tbody>
<tr><td><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl1BYCfU8FZYXP0iy8S_P8KRTSbUT3Cgb5bTdbntMJaPtCECIKSOaBD4Z8H_XAlvFXDQT7G-azg5yjoqfgfWJxyXIhydsPnARQLPIQEYMZQdMSOEoqQMH_vuAWbUHJlB6yV37AvzszX6I/s1600/intel_logo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl1BYCfU8FZYXP0iy8S_P8KRTSbUT3Cgb5bTdbntMJaPtCECIKSOaBD4Z8H_XAlvFXDQT7G-azg5yjoqfgfWJxyXIhydsPnARQLPIQEYMZQdMSOEoqQMH_vuAWbUHJlB6yV37AvzszX6I/s320/intel_logo.png" width="320" /></a></div>
</td><td>For years, Intel network cards, both wired and wireless, have been a standard of quality, performance and compatibility in my book. Unfortunately, this situation is changing rather quickly. Recently, the quality of the hardware and software drivers of the Intel products has been disappointing. There are compatibility problems with Windows 10 and I haven't found solutions to resolve the problems described below. I am sharing this information in an attempt to help other people avoid the problems if they can.</td></tr>
</tbody></table>
<div>
<span style="background-color: #ffd966;"><i></i></span><br />
<a name='more'></a><br />
<i>Note</i><span style="background-color: #ffd966;">. The Intel network cards in my past experience had always been a top grade standard of quality and performance. Unfortunately, not any more and I don't feel any loyalty to the brand any longer.</span></div>
<div>
<span style="background-color: #ffd966;"><br /></span></div>
<h2 style="text-align: left;">
Problem 1</h2>
<h3 style="text-align: left;">
Intel Wireless + Bluetooth notebook adapter Intel Centrino® Wireless-N 2230</h3>
<div>
After upgrading to <span style="background-color: #9fc5e8;">Windows 10</span> this adapter does not work any more. The problem is easily found using an Internet search with numerous reports and no solutions. The problem is that the adapter drops a Wi-Fi connection quickly and frequently and after a few drops disconnects completely with a hardware error entry in the Windows event log.</div>
<div>
<br /></div>
<div>
<span style="background-color: #93c47d;"><i>Note</i>: under Windows 8.1 this adapter worked flawlessly.</span></div>
<div>
<br /></div>
<div>
Intel claims they have released the latest drivers in the Windows 10 distribution and don't have any updates. Windows 10 does recognize the adapter but it misbehaves consistently with the mentioned driver. The only resolution, if you want to keep Windows 10, is to replace the adapter with another brand.<br />
<br />
<span style="background-color: #e06666;"><i>Note</i>: an attempt to use the older Windows 8.1 drivers with Windows 10 did not help: the older driver misbehaves exactly the same way.</span></div>
<h2 style="text-align: left;">
Problem 2</h2>
<h3 style="text-align: left;">
Intel wired Intel Ethernet Connection I219-LM</h3>
<div>
The adapter randomly but regularly disconnects from the network with a warning entry in the event log under <span style="background-color: #9fc5e8;">Windows 10</span> on a brand new computer. No driver updates exist to help with this problem. Numerous reports and no solution can be quickly found on the Internet. This is a very annoying problem as Windows 10 does not really show that the adapter has lost a connection in the icon tray, because no error is reported, just a warning. You can only figure that out once a program reports that it cannot connect to the Internet. The only remedy is to disable and re-enable the adapter in the network settings. Most of the complaining people install an alternative PCI or USB adapter of a different brand.</div>
<div>
<br /></div>
<div>
If you have anything to add please leave a comment below.</div>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-27455158133103970682017-01-06T20:18:00.004-05:002023-04-08T16:25:56.348-04:00VPN Site-to-Site connection between Azure VPN Gateway and pfSense: Important Details<div dir="ltr" style="text-align: left;" trbidi="on">
<table><tbody>
<tr><td><div class="separator" style="clear: both; text-align: justify;">
</div>
</td><td><span style="font-family: Times, Times New Roman, serif; font-size: large;"><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkTcbDr-pFcPLf0HlkjL0MWBIYANt2xjDkx0vNxdp2Dmcku9o0jiauuhSqTuLpij_JtMRig78bALg_64NnCH7_US_NQxgmYLcoS80WNVfx7Ba-fTNUPxflPap5ravH8Q2KoveokXZSR0k/s1600/az-gateway.jpg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkTcbDr-pFcPLf0HlkjL0MWBIYANt2xjDkx0vNxdp2Dmcku9o0jiauuhSqTuLpij_JtMRig78bALg_64NnCH7_US_NQxgmYLcoS80WNVfx7Ba-fTNUPxflPap5ravH8Q2KoveokXZSR0k/s16000/az-gateway.jpg" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"></td></tr></tbody></table></span><div><span style="font-family: Times, Times New Roman, serif; font-size: medium;">When implementing an IaaS solution on Azure for, let's say, hosting a web application, it is important to have a controlled secure access to the Azure network from the office site in order to limit an exposure of the Azure virtual machines to the public Internet and prevent unauthorized access to the Azure network resources. Standard solution for this requirement is a site-to-site VPN that allows continuous encrypted connection between two networks: a physical office network and a virtual network on Azure.</span></div></td></tr>
</tbody></table>
<a name='more'></a><p style="text-align: left;"> There is an out of the box Azure solution for the site-to-site VPN: virtual network Gateway. It's a fairly straightforward IaaS component that supports site-to-site VPN over IPSec protocol and point-to-site VPN access point. It's well documented on Azure https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal.</p>
In the scope of this article I will not be explaining the basics of this solution but rather focus on some specific details of implementing it for one particular scenario: in the office the VPN is implemented on a pfSense firewall. The specifics of such an implementation are these:<br />
<ul style="text-align: left;">
<li>Azure does not have a standard VPN gateway configuration to connect with pfSense;</li>
<li>Azure requires IPSec to be used for site-to-site VPN which is rather tricky to configure on pfSense;</li>
<li>Particulars of IPSec configuration are not documented.</li>
</ul>
There are articles written about this scenario on the Internet (just do a search) but even with the most detailed explanations and screenshots it may take some time and debugging effort to make it work. On the flip side, once it works it will most likely be very reliable and trouble-free. Let's waste no more time and get right to the matter.<br />
<br />
Start from creating and configuring a VPN gateway on Azure as this is a prerequisite. Follow the Azure documentation in the list below:<br />
<ul style="text-align: left;">
<li>https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-site-to-site-create</li>
<li>https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpn-devices</li>
<li>https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-configure-vpn-gateway-mp</li>
</ul>
<div>
Once you have the VPN gateway created and the VPN device configured, it's time to configure the pfSense. Keep in mind that because the pfSense is not a supported Azure VPN device you won't be able to get technical help from Azure, so pay attention to the details.</div>
<h3 style="text-align: left;">
Step 1. Configure pfSense VPN IPSec Phase 1</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0bdk489A04E_k9NmzSfIiXUraohdND9Dece9lKRp30DD6rTXu0fXL56hQVS971b1cUC5y5Sof47UPEuF_BUeXEiVTtuUToq9TxlMk2nG6fDdrytUAzcs30FBQIK1aiu0_v06skxPpeGY/s1600/pfs-ipsec-ph1.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0bdk489A04E_k9NmzSfIiXUraohdND9Dece9lKRp30DD6rTXu0fXL56hQVS971b1cUC5y5Sof47UPEuF_BUeXEiVTtuUToq9TxlMk2nG6fDdrytUAzcs30FBQIK1aiu0_v06skxPpeGY/w497-h640/pfs-ipsec-ph1.jpg" width="497" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<h3 style="clear: both; text-align: left;">
Step 2. Configure pfSense VPN IPSec Phase 2</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgisEnabWsebWVutmgwDelQKYrSNGO8HGZWMxPPRLy_dc8-jL12YVHPNeR3C5jqS25lTBP5GGlbclxC_HbErSMfLYhyphenhyphenbF-cKoFseK1d5SPIXJA0psL4h8uPZRb1iOJTLrsiD2ob1KKo5WE/s1600/pfs-ipsec-ph2.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgisEnabWsebWVutmgwDelQKYrSNGO8HGZWMxPPRLy_dc8-jL12YVHPNeR3C5jqS25lTBP5GGlbclxC_HbErSMfLYhyphenhyphenbF-cKoFseK1d5SPIXJA0psL4h8uPZRb1iOJTLrsiD2ob1KKo5WE/w554-h640/pfs-ipsec-ph2.jpg" width="554" /></a></div>
<br />
<h3 style="text-align: left;">
Step 3. Connect and enjoy</h3>
If you configured everything exactly how it's shown on the screenshots the connection should be established and stay connected.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtIDW4zCrUsM0Fhqd04RF5e-LG-rS2JcDyGuN-Zj3FRiMXcqYd4YrtayyEsyXfDKztbwiZWNvKDG_KmuVjbX7-qJ-xQsW3jSlNVRy4xdGLU0fguIPMgN_hYn7406X9m3jV5xlpB3NCk0k/s1600/pfs-ipsec-stat.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtIDW4zCrUsM0Fhqd04RF5e-LG-rS2JcDyGuN-Zj3FRiMXcqYd4YrtayyEsyXfDKztbwiZWNvKDG_KmuVjbX7-qJ-xQsW3jSlNVRy4xdGLU0fguIPMgN_hYn7406X9m3jV5xlpB3NCk0k/w640-h173/pfs-ipsec-stat.jpg" width="640" /></a></div>
<h3 style="clear: both; text-align: left;">
Troubleshooting</h3>
<div class="separator" style="clear: both; text-align: left;">
If pfSense and the Azure gateway cannot connect, most likely there is a mistake in the configuration. Verify thoroughly that all the pfSense settings are exactly as shown on the screenshots. If you don't see the required options in the settings, your pfSense is outdated and you need to upgrade to a more recent version that supports all the required protocols and algorithms.</div>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-64973525977517501672015-12-29T15:41:00.001-05:002020-11-03T12:44:50.787-05:00Easily add config transformation files to a Console App project<div dir="ltr" style="text-align: left;" trbidi="on">
This is a very short post. Without digging into the theory of config transformations, just add the <a href="https://visualstudiogallery.msdn.microsoft.com/579d3a78-3bdd-497c-bc21-aa6e6abbc859" rel="nofollow" target="_blank">Configuration Transform</a> add-on to your Visual Studio and you'll be able to right-click on an App.config file and see "Add Config Transforms" in the context menu. Works just fine.<br />
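<p>For context, the generated transform files are plain XDT overlays applied to App.config when the matching configuration is built; a minimal sketch of what an App.Release.config might contain (the setting name is purely illustrative):</p>
<pre><code class="lang-xml"><?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- replace the value of an existing key when building the Release configuration -->
    <add key="ApiUrl" value="https://api.example.com" xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration></code></pre>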
<div>
<br /></div>
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPtIOQg4zgrWRH9QCyNoXz8uDoxMHLkF7D1hZx63VgsUL_SdpMCVdEYKadUYG8__UJ_IreR-BAaGDIaNoyFz85qlWoZawJePqJA8grX34UOTXbP7qQSdZIj66DeDMfrfLa6MdGhZkZepg/s1600/package_preview.png" imageanchor="1"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPtIOQg4zgrWRH9QCyNoXz8uDoxMHLkF7D1hZx63VgsUL_SdpMCVdEYKadUYG8__UJ_IreR-BAaGDIaNoyFz85qlWoZawJePqJA8grX34UOTXbP7qQSdZIj66DeDMfrfLa6MdGhZkZepg/s320/package_preview.png" width="320" /></a></div>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-22675862076864891922015-10-30T11:53:00.002-04:002020-11-03T12:45:45.458-05:00Using Google Tag Manager to deploy Azure Application Insights client-side monitoring<div dir="ltr" style="text-align: left;" trbidi="on">
<table><tbody>
<tr><td><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqO23qup1eWODwV2wNT-8RR5hiubSNxVn01ng06sdgvLKCTzG8PIA8PCM8bpAmyei6-eQ-vZI5NL3E3jB6vspDFncO-gvqtH5tS5xRSX7zGqglQ4zL5MQhMc9OoylittmWiypsRoxJlN8/s1600/app_ins_logo.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqO23qup1eWODwV2wNT-8RR5hiubSNxVn01ng06sdgvLKCTzG8PIA8PCM8bpAmyei6-eQ-vZI5NL3E3jB6vspDFncO-gvqtH5tS5xRSX7zGqglQ4zL5MQhMc9OoylittmWiypsRoxJlN8/s320/app_ins_logo.png" width="320" /></a></div>
</td><td><span face="Verdana, sans-serif">If your web application is hosted on Microsoft Azure and you are using Application Insights for monitoring you'll have to add a piece of JavaScript on web pages to collect client-side statistics like page load time, JavaScript errors, users and sessions analytics, etc. The script can be found on Azure portal when Application Insights is enabled for a web application.</span></td></tr>
</tbody></table>
<br />
<a name='more'></a><br />
Once you enable your web application for Application Insights monitoring, the JavaScript can be found on Azure portal and it should look similar to this (don't pay attention to formatting):<br />
<pre><code class="lang-js">
var appInsights=window.appInsights||function(config){
function r(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},u=document,e=window,o="script",s=u.createElement(o),i,f;for(s.src=config.url||"//azyyyyyyy.vo.msecnd.net/scripts/a/ai.0.js",u.getElementsByTagName(o)[0].parentNode.appendChild(s),t.cookie=u.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)r("track"+i.pop());return r("setAuthenticatedUserContext"),r("clearAuthenticatedUserContext"),config.disableExceptionTracking||
(i="onerror",r("_"+i),f=e[i],e[i]=function(config,r,u,e,o){var s=f&&f(config,r,u,e,o);return s!==!0&&t["_"+i](config,r,u,e,o),s}),t
}({
instrumentationKey:"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
});
window.appInsights=appInsights;appInsights.trackPageView();
</code></pre><br />
Standard Azure instruction says:<br />
<blockquote class="tr_bq">
<i>To collect end-user usage analytics about your application, insert the following script into each page you want to track. Place this code immediately before the closing tag, and before any other scripts. Your first data will appear automatically in just a few seconds.</i></blockquote>
The trouble with this is that you may need to modify many web pages and then do a live deployment before any data will be collected, and then, if you are not happy with something, repeat the cycle all over again. It's <b>time consuming and counterproductive</b>. What's the solution then?<br />
<br />
If you already use Google Analytics on your web site you may be familiar with Google Tag Manager, aka GTM; if not, Google it up. In short, GTM allows dynamically adding arbitrary blocks of JavaScript to live web sites without making any changes to the code, and controlling the JavaScript blocks' presence from the Web UI. <b>Convenient, efficient, and provides good control</b>.<br />
<br />
In general GTM works out of the box with little or no troubleshooting required, but some scripts may not work with it. Many script providers usually state their known compatibility level with GTM. Unfortunately <b>Azure does not say anything about GTM in their documentation</b>. Regardless, first, it's worth a try, and secondly, it does work.<br />
<br />
<b><u>The Application Insights JavaScript for client-side analytics does work with Google Tag Manager and requires no changes nor troubleshooting</u></b>. Clear pass. Give it a try and enjoy the ride.
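<p>If you go this route, the snippet is added in GTM as a Custom HTML tag; the only adaptation is wrapping it in script tags, roughly like this (a sketch — paste the actual snippet from your Azure portal, with your own instrumentation key, in place of the comment):</p>
<pre><code class="lang-html"><script type="text/javascript">
// paste the Application Insights JavaScript from the Azure portal here
</script></code></pre>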
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-90295385442916147272015-06-11T14:54:00.001-04:002020-11-03T12:46:14.605-05:00WebDav Client for Windows<div dir="ltr" style="text-align: left;" trbidi="on">
Recently I've been involved in migrating a JIRA Server to JIRA Cloud and one particular required step was to upload exported JIRA data files onto Atlassian cloud storage. Atlassian supports the WebDav protocol but they have no suggestions about what client software to use, most likely because of the large diversity of operating systems their customers use.<br />
<br />
I was working on Windows 8/Server 2012 and had no previous experience with, nor preference for, a particular WebDav client. A Google search returns a number of choices on the first page but it's hard to pick one based just on reading, so I ended up installing and trying a number of different software packages. Without further ado, to save time for anyone facing a similar exercise, my tool of choice happened to be BitKinex. Extremely easy and intuitive to use, it works immediately with no additional configuration, and provides a familiar Windows Explorer user experience with drag and drop support. Thumbs up!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKTQS-7PJZCj0nNuYdF7SRf1CIgq_meCid7qsamihRMW4rCAHNSggLuM2jHJym1jPJhEknEJkZVky7l17RdQkuRlj7wUs0IEENtOgFgr1mFw9UKHiHzpfeHiNXH8p5DULEzzlSdy3w6bc/s1600/BitKinex.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKTQS-7PJZCj0nNuYdF7SRf1CIgq_meCid7qsamihRMW4rCAHNSggLuM2jHJym1jPJhEknEJkZVky7l17RdQkuRlj7wUs0IEENtOgFgr1mFw9UKHiHzpfeHiNXH8p5DULEzzlSdy3w6bc/s640/BitKinex.jpg" width="640" /></a></div>
<br /></div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-83321351273125166602015-05-05T13:59:00.000-04:002015-06-11T14:57:41.356-04:00Controlling console output from a windows task<div dir="ltr" style="text-align: left;" trbidi="on">
Windows tasks continue to be very handy, especially for automating batch operations that do not require constant human attention. But even such operations may require occasional human intervention to troubleshoot problems.<br />
<br />
Most Windows tasks execute command line programs that do not have a graphical user interface but produce output in the form of log files or console messages. While monitoring of log files can be relatively easily automated via different kinds of log collection services, the console output, which is mostly meant for human eyes and could provide helpful information, is usually lost because by default a Windows task does not save console output anywhere.<br />
<br />
There is, however, a relatively easy way to save the console output produced by a Windows task in a text file. The Windows command line environment provides a way to redirect a program's standard output (<i>stdout</i>) or error output (<i>stderr</i>) to a file using the redirection operators '>' or '>>'. The first one writes the output to a file and replaces the file with a new version if it exists, and the second one appends the output to the file if it exists.<br />
<br />
In order to use output redirection, a Windows task action should be configured to execute a CMD shell instead of the actual command line program. So instead of<br />
<br />
<span style="background-color: black; color: #eeeeee; font-family: Courier New, Courier, monospace;"><b>>myprogram.exe > output.txt</b></span><br />
<br />
it should be configured as<br />
<br />
<span style="color: #eeeeee; font-family: Courier New, Courier, monospace;"><b style="background-color: black;">>CMD /C "myprogram.exe" > output.txt</b></span><br />
<br />
In the first example of calling the program directly, the Windows task will ignore the output redirection and the output will not be saved. In the second example the redirection will be executed not by the Windows task but by the CMD shell instead, and the output will be saved in a file.<br />
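The same approach extends to keeping a history of runs and to capturing error messages: use '>>' to append instead of overwrite and add '2>&1' to send <i>stderr</i> to the same file (a sketch; the file name is illustrative):<br />
<br />
<pre><code class="lang-batch">C:\> CMD /C "myprogram.exe" >> output.txt 2>&1</code></pre>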
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6Smp6IM8XBH-_r7Q2-Y562o9jonGxJhyNlG2Un8Zcf9Gq2qkgQTdUGSDcpJ-4E5gSKJRoGrs1r7DcyhpOEWN6Doht-U-JqoZuJ1FAA09_Xdd82vNTUhYCFinHahFkQB4oyhO71QG4lVk/s1600/wintsk0.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6Smp6IM8XBH-_r7Q2-Y562o9jonGxJhyNlG2Un8Zcf9Gq2qkgQTdUGSDcpJ-4E5gSKJRoGrs1r7DcyhpOEWN6Doht-U-JqoZuJ1FAA09_Xdd82vNTUhYCFinHahFkQB4oyhO71QG4lVk/s320/wintsk0.jpg" width="294" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDF0JAW0a-s9rVCyrvOFYcHpJSuKp80ytPN47PqyxnVxJUKm2G5rbbRUlfd5JXXNX-KSoUJTtlXuTlmYP99T3JAhpxiScDrFd4TxnnqE1rNmARR0qyi0bQsdlaovRCWQV4x8zwrZ5_MQE/s1600/wintsk1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="148" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDF0JAW0a-s9rVCyrvOFYcHpJSuKp80ytPN47PqyxnVxJUKm2G5rbbRUlfd5JXXNX-KSoUJTtlXuTlmYP99T3JAhpxiScDrFd4TxnnqE1rNmARR0qyi0bQsdlaovRCWQV4x8zwrZ5_MQE/s320/wintsk1.jpg" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQzKjVKsuwWNMqikI5W98lziaWuB7rhQvuzYaGAlksehSCJZ0EUcKzZqS2yhFmmdsiGLpP7KXCzn8JL0vI6erJfY7w9qiDPdvoh5WUUYLqFdbQg_I68_KSnWoxgM2wNRg3hRSZQbEnyI0/s1600/wintsk2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQzKjVKsuwWNMqikI5W98lziaWuB7rhQvuzYaGAlksehSCJZ0EUcKzZqS2yhFmmdsiGLpP7KXCzn8JL0vI6erJfY7w9qiDPdvoh5WUUYLqFdbQg_I68_KSnWoxgM2wNRg3hRSZQbEnyI0/s320/wintsk2.jpg" width="320" /></a></div>
<br /></div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0tag:blogger.com,1999:blog-602850767119300869.post-76982478198460000382014-09-19T13:08:00.002-04:002020-11-02T21:56:13.471-05:00404 Error in IIS for a Url with a plus + in the Path<div dir="ltr" style="text-align: left;" trbidi="on">
Default Internet Information Server (IIS) behavior since IIS 7 rejects requests to Urls with a plus (+) sign in the path like:<br />
<br />
<blockquote class="tr_bq">
www.somesite.com/one+two or<br />
www.somesite.com/path/subpath/three+four</blockquote>
<br />
The behavior is considered a security feature and is controlled by a setting called "Double Escaping Filtering". To override the default rejecting behavior the setting needs to be changed either via a web.config configuration:
<br /><br />
<pre><code class="language-xml">
<system.webServer>
  <security>
    <requestFiltering allowDoubleEscaping="true" />
  </security>
</system.webServer>
</code></pre>
or an IIS Management user interface:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjokDJUVH0L38lAmpuPqN_FAPxMEPe-zTf8GfC0N8_fXa7Xrvl244-AW476IjWPTci_ymoNipQAd0p9HLh084X79yb58zIF2pwDdhiwxo41S4EOktegI0Am3Ya1UeEXzPFPrTvjD5zG3XE/s1600/IIS+Double+Escaping.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="459" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjokDJUVH0L38lAmpuPqN_FAPxMEPe-zTf8GfC0N8_fXa7Xrvl244-AW476IjWPTci_ymoNipQAd0p9HLh084X79yb58zIF2pwDdhiwxo41S4EOktegI0Am3Ya1UeEXzPFPrTvjD5zG3XE/s1600/IIS+Double+Escaping.jpg" width="640" /></a></div>
</div>
AThttp://www.blogger.com/profile/03810972326430206042noreply@blogger.com0