Showing posts with label C#. Show all posts

Wednesday, August 12, 2015

'Sorting' is not allowed during an AddNew or EditItem transaction

Recently I resolved a WPF GUI bug related to the exception:
'Sorting' is not allowed during an AddNew or EditItem transaction.

This Stack Overflow discussion has the correct answer to my issue, although it isn't marked as the accepted answer at the time of writing.  So I would like to list it here, explain it a bit, and hopefully help somebody out.

Background

We had a DataGrid in a customized control, which is used to select files.  The control
has a ViewModel class as its DataContext and can load different ViewModel classes.  The control itself stays in memory, but the ViewModel gets disposed every time the view is closed.

It works fine most of the time.  However, if a user sorts the DataGrid by clicking a column header and then selects a file by double-clicking, then the next time the view is opened, the exception
'Sorting' is not allowed during an AddNew or EditItem transaction.
is thrown.

I checked the stack trace of the exception and saw it came from System.Windows.Data.ListCollectionView.SortDescriptionsChanged().  Using ILSpy to inspect the ListCollectionView class, we can see this code in SortDescriptionsChanged():
            if (this.IsAddingNew || this.IsEditingItem)
            {
                throw new InvalidOperationException(Resx.GetString("CollectionView_MemberNotAllowedDuringAddOrEdit", new object[]
                {
                    "Sorting"
                }));
            }
 

Solutions

ptsivakumar listed 2 solutions, both of which work in my case.  I list them here in case that thread becomes inaccessible.

1)  Call CommitEdit() and CommitNew()
These calls should go in a place suitable for your specific case.  In my case, I put them in the view's Close event handler:

            var dataView = (ListCollectionView)CollectionViewSource.GetDefaultView(DataGrid_Files.ItemsSource);
            if (dataView.IsEditingItem)
                dataView.CommitEdit();
            if (dataView.IsAddingNew)
                dataView.CommitNew();

2)  Add IsReadOnly="True" to the DataGrid control, and remember to also set CanUserAddRows="False"


Both solutions work well.  The reason is that when the DataGrid is double-clicked, it enters edit mode, but sorting is not allowed while the DataGrid is in AddingNew or EditingItem mode.  So the first solution commits any pending changes and returns the DataGrid to its normal mode before the next sort.  The second solution works because those settings keep the DataGrid in normal mode at all times; the double-click is simply ignored.

Friday, April 3, 2015

Memory leak caused by RegisterClassHandler of EventManager in WPF

Recently I spent some time fixing performance and memory-usage issues in a big WPF project.  One critical bug reported was that every time a user opened an editing dialog, memory jumped by around 5 MB, even if the user immediately closed the dialog without doing anything meaningful.  The memory kept growing and was never released.

Investigation

First I spent some time understanding the related code.  The editor view has an embedded child view, EditorContent, which in turn embeds 2 grandchild views.  Every view is paired with a ViewModel class.  My strategy was to temporarily remove some child classes to find which View or ViewModel caused the problem.


This time I used Telerik's JustTrace for the memory profiling.  After several rounds of trying, I finally located the problem in the child view EditorContent.  After carefully comparing the 2 snapshots taken before and after opening the editor dialog, I noticed some suspicious events and controls along the GC root path.  The control, called BarcodeTextBox, derives from the standard TextBox.

At the same time, I searched the Internet for clues.  These 2 articles from Jeremy Alles and Devarchive.net have good hints.  When I went back to check the BarcodeTextBox class, I did find EventManager.RegisterClassHandler() in the code, doing exactly what those 2 articles describe.


            EventManager.RegisterClassHandler (typeof(BarcodeTextBox), PreviewKeyDownEvent,
                new KeyEventHandler(HandlePreviewKeyDown));


So the fix is easy: either move the above code to a static constructor, or declare the event handler HandlePreviewKeyDown() as static.  After this was done, the memory was released right after the dialog was closed.
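A minimal sketch of the fixed class, assuming the registration previously happened in the instance constructor (the handler body here is a placeholder, not the original project code):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public class BarcodeTextBox : TextBox
{
    // The static constructor runs once per type, not once per instance,
    // so the global handler store never captures an instance reference.
    static BarcodeTextBox()
    {
        EventManager.RegisterClassHandler(typeof(BarcodeTextBox), PreviewKeyDownEvent,
            new KeyEventHandler(HandlePreviewKeyDown));
    }

    // Static handler: 'sender' is the instance that raised the event,
    // so instance state is still reachable without capturing 'this'.
    private static void HandlePreviewKeyDown(object sender, KeyEventArgs e)
    {
        var box = (BarcodeTextBox)sender;
        // ... barcode-specific key handling on 'box' ...
    }
}
```

Either half of the fix alone is enough to stop the leak; doing both matches how the MSDN sample is written.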


Afterthought

From MSDN, I haven't found anything saying it's mandatory to call RegisterClassHandler() from a static constructor or to declare the handler as static.  But the MSDN example does use it that way, and the name itself suggests the handler should exist at the class level, not the instance level.

Checking EventManager in ILSpy, we can see that RegisterClassHandler() calls GlobalEventManager to add the handler to a global event handler array.  From the name, we can guess the handler (and therefore its owner object) will be held forever if no explicit unregister method is called.  Unfortunately, the static EventManager class doesn't even have a method to unregister an event handler.  So I guess that, by design, handlers registered this way should behave like static functions and shouldn't exist at the instance level.

From what I investigated, events are really dangerous in the C# world.  Every time we use "+=" to add an event handler, or register one, we should always look for a way to unregister it.  For example, I found this one.  The scenario is tricky and quite likely to happen.
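As a general illustration of that rule (my own sketch, not code from the project), pairing every "+=" with a "-=" in Dispose keeps the publisher from holding the subscriber alive:

```csharp
using System;

public class Publisher
{
    public event EventHandler FileLoaded;

    public void RaiseFileLoaded() => FileLoaded?.Invoke(this, EventArgs.Empty);
}

public sealed class Subscriber : IDisposable
{
    private readonly Publisher _publisher;

    public int CallCount { get; private set; }

    public Subscriber(Publisher publisher)
    {
        _publisher = publisher;
        _publisher.FileLoaded += OnFileLoaded;    // subscribe
    }

    private void OnFileLoaded(object sender, EventArgs e) => CallCount++;

    // Without this, the publisher's invocation list keeps this object alive
    // for as long as the publisher itself lives.
    public void Dispose() => _publisher.FileLoaded -= OnFileLoaded;
}
```

If the publisher outlives the subscriber (a static service, for example), the missing "-=" is exactly the kind of leak described above.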

Wednesday, March 4, 2015

Unnecessary memory usage when working with Entity Framework

Recently I was asked to fix a performance issue in one of our WPF projects.  Here is a little background: the project is built on Prism 4.1 and Entity Framework 5.  We used the database-first approach, so all the entity classes were generated from existing database tables.  We also used MEF (Managed Extensibility Framework) to export and import Views and ViewModels, so our first-level Views and ViewModels were loaded during application launch, which essentially made those objects static.

We had a requirement that the system should analyze and store at least 120 user data files, the maximum number of data files a user could get in one month.  However, at around 60 files the system threw an "out of memory" exception and stopped working.

Investigation

First I suspected that we had some memory leaks, so I started using memory profilers to locate possible problems.  I tried the Visual Studio built-in performance profiler.  Since I had used this tool for CPU-related performance diagnostics before, with good results, I thought it was the tool to use.  However, this time I was unlucky.  First, it was slow: generating analysis reports seemed to take forever.  Then, while waiting, I found this blog post, which said that the memory profiler doesn't support WPF applications on Windows 7.  So what I was actually running was sample profiling, useful for CPU performance checking, but not for memory.

Later I switched to the trial version of JetBrains dotMemory and found it a really good tool, fast and easy to use.  With it, I observed that after one single file (I used the same file each time) was loaded into the system, the system memory went up by around 20 MB.  After loading more than 50 files, memory reached 1.2 GB and at some point an exception was thrown.

Finally I found a suspicious place where some unnecessary memory was allocated.  Since this part was in a static ViewModel, it was never released.  The scenario was like the following.  The application needed to create an object A for the loaded data file.  A had a child property C.  In order to get C, we had to call some functions to get an object B, which also had a property C.  After getting B, we directly did this:
    A.C = B.C

This is a normal thing we do all the time, and normally it's fine: even though C is created as a child of B, B can still be garbage collected and released while C is held by A.  However, since our C and B are Entity Framework classes, they reference each other, something like this:
    class B
    {
        ICollection<C> List_C;    // navigation collection generated by EF
    }

    class C
    {
        B B;                      // back-reference to the parent
    }

So if C's back-reference to B is set (not null), B will never be garbage collected and released.

Demonstration

To prove my guess, I wrote the following test program.  I allocated a big chunk of memory in the SuperResult class, so I could see the memory jump dramatically.


    public class RealResult        // This is class A
    {
        public Detail Detail { get; set; }
        public string Name { get; set; }
        public string OtherProperty { get; set; }

        public RealResult()
        {
            Detail = new Detail();
        }
    }

    public class Detail        // This is class C
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public SuperResult Super { get; set; }
    }

    public class SuperResult    // This is class B
    {
        public Detail Detail { get; set; }
        public byte[] BigData { get; set; }

        public SuperResult()
        {
            Detail = new Detail();
            BigData = new byte[10000000];
        }
    }

    public partial class Main : Window    // code-behind; listboxResult is defined in the XAML
    {
        private int _itemIndex;
        private List<RealResult> _listReal;

        public Main()
        {
            InitializeComponent();
            _itemIndex = 0;
            _listReal = new List<RealResult>();
        }

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            AddOneItem();
        }

        private void AddOneItem()
        {
            SuperResult super = GetSuperResult();
            RealResult real = new RealResult();
            real.Detail = super.Detail;    // We set reference here.  Normally it should be fine
            _listReal.Add(real);
            listboxResult.Items.Add(real.Detail.Name);
        }

        private SuperResult GetSuperResult()
        {
            _itemIndex++;
            SuperResult result = new SuperResult();
            result.Detail.Super = result;        // This one sets Detail point to Super, so Super won't get released
            result.Detail.Name = "Item" + _itemIndex;
            return result;
        }
    }

Then I used dotMemory again to verify the behavior of this test project.  After I clicked the button and added one item, the memory jumped up by 10 MB and was never released, even after I explicitly forced a GC collection.

The fix is pretty easy.  First, try not to use the Super property of the Detail class.  EF provides it for convenience, but I don't see it as necessary here.  If you remove result.Detail.Super = result, then Super will be released even though Detail is still held by RealResult.

If you do have to use it, then when you assign super.Detail to real.Detail, don't assign the reference; instead, clone the class and copy the values.  This also disconnects Super from RealResult, so Super can be released.
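Sketched against the Detail class from the test program above, a clone helper could look like this (CloneDetail is my name for illustration, not from the original code):

```csharp
// Hypothetical helper: copy only the values, and deliberately leave the
// Super back-reference null so the SuperResult stays collectable.
private static Detail CloneDetail(Detail source)
{
    return new Detail
    {
        Name = source.Name,
        Description = source.Description,
        Super = null    // break the cycle instead of carrying it over
    };
}

// In AddOneItem(), instead of assigning the reference:
//     real.Detail = super.Detail;
// copy the values:
//     real.Detail = CloneDetail(super.Detail);
```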

After I fixed this part, a big chunk of memory was released.  The application could handle 120 data files and no memory exception was thrown.


Conclusion

Entity Framework makes parent and child classes reference each other by default.  This is sometimes convenient, but it's also dangerous if abused, so only use the parent property when it's really needed.

Assigning a reference is simple and natural, but sometimes it's better to clone and copy data values.  It may mean more code, but it cuts the tie between the calling and called modules.  When the application becomes more complex, with multiple modules tangled together, deep copying and cloning decouple the application more than assigning references does.

Wednesday, April 10, 2013

ObservableCollection performance issue

When a collection is needed for data binding in a WPF/Silverlight application, ObservableCollection is the class to use.  Normally ObservableCollection is good enough to handle data binding.  However, if the collection grows very big and performance becomes an issue, we may want to look for other options.

Background

I had a file list which could contain more than ten thousand records.  In my case, filling the file list brought some performance issues and took 20 seconds or so.  In my personal opinion, 20 seconds is not terribly bad considering the really time-consuming work that had to be done in that time.  In order not to freeze the GUI during this time span, I used 2 separate background workers to do the background work; the main thread then only updates the GUI elements.

Although I found I could not cut much time from those 2 background threads, I did notice that updating the GUI took a long time and the list was refreshing very frequently.  The issue is related to the design of ObservableCollection.  When I used ILSpy to check the ObservableCollection class, I found it has one CollectionChanged event and 2 PropertyChanged events, and every time an item is added or removed, those events are fired.  That means if ten thousand files are added, 30 thousand events are fired.  These events cause the bound GUI elements to refresh, which can be very time-consuming.  In my case, real-time refreshing was not necessary; the better solution is to refresh the GUI only after all items, or a fixed batch of items, have been added.  This can save a lot of time.
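The 3-events-per-item behavior is easy to confirm with a small sketch: ObservableCollection raises one CollectionChanged event plus PropertyChanged notifications for Count and Item[] on every Add (PropertyChanged is exposed through the explicitly implemented INotifyPropertyChanged interface):

```csharp
using System;
using System.Collections.ObjectModel;
using System.ComponentModel;

var files = new ObservableCollection<string>();
int collectionChangedCount = 0;
int propertyChangedCount = 0;

files.CollectionChanged += (s, e) => collectionChangedCount++;
// PropertyChanged is implemented explicitly, so cast to the interface first.
((INotifyPropertyChanged)files).PropertyChanged += (s, e) => propertyChangedCount++;

for (int i = 0; i < 10000; i++)
    files.Add("file" + i);

// 10,000 CollectionChanged + 20,000 PropertyChanged = 30,000 events total
Console.WriteLine(collectionChangedCount);
Console.WriteLine(propertyChangedCount);
```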

RangeObservableCollection class

First, a change to ObservableCollection is a good start: we need bulk add and delete operations.  There are several code examples on the Internet, and peteohanlon's looks simple and neat.  However, after checking this discussion, I found that although each add in peteohanlon's solution doesn't fire a CollectionChanged event, it still fires the PropertyChanged events.  weston made very good points in that discussion.  So I changed my RangeObservableCollection class to the following:


    public class RangeObservableCollection<T> : ObservableCollection<T>
    {
        public void AddRange(IEnumerable<T> list)
        {
            if (list == null)
                return;

            // Items bypasses the per-item notifications entirely
            foreach (T item in list)
                Items.Add(item);

            SendNotifications();
        }

        public void RemoveRange(IEnumerable<T> list)
        {
            if (list == null)
                return;

            foreach (T item in list)
                Items.Remove(item);

            SendNotifications();
        }

        private void SendNotifications()
        {
            // One Reset plus the 2 PropertyChanged events, once per bulk operation
            OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
            OnPropertyChanged(new PropertyChangedEventArgs("Count"));
            OnPropertyChanged(new PropertyChangedEventArgs("Item[]"));
        }
    }

I also wrote some unit tests to check how many events are fired and what the performance is.  I could prove that my class fires only one CollectionChanged event and 2 PropertyChanged events per bulk add.  For the performance test, I simply prepared a list of 1 million records, then added it to the different range classes.  In my test, adding records one by one to an ObservableCollection took 0.230 seconds, but AddRange on my RangeObservableCollection took only 0.072 seconds.  When I tested peteohanlon's class, it took almost the same time as the traditional ObservableCollection, so I guess the PropertyChanged events do take some resources.

Again, my test only exercised the collection operations and didn't involve any GUI updates.  I guess the main advantage of bulk add and delete is saving GUI updates, which can be critical to performance.  In a real data-binding scenario, GUI elements respond to all the PropertyChanged and CollectionChanged events by refreshing the control, which can be a huge waste of resources.


An Internal list


Furthermore, I used an internal list to keep all my data.  After a fixed number of files were added, I called the AddRange() method to add them to the RangeObservableCollection instance.  After all files were added, I called a Sort() method on my internal list and re-created the RangeObservableCollection instance.  The overhead here is re-creating the collection instance.

When deleting, I always worked on the internal list; only when all queried files were deleted did I call the Sort() function and re-create the collection instance.  Again, a re-creation overhead happens here.  In my test, deleting from the internal list was much faster than deleting directly from the data-bound ObservableCollection instance.
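A sketch of that buffering pattern, using the RangeObservableCollection class from above (the ViewModel shape and member names are illustrative, not from the original project):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical ViewModel fragment: mutate a plain List<T> in the
// background, and only push batches into the data-bound collection.
public class FileListViewModel
{
    private readonly List<string> _allFiles = new List<string>();

    public RangeObservableCollection<string> BoundFiles { get; private set; }

    public FileListViewModel()
    {
        BoundFiles = new RangeObservableCollection<string>();
    }

    // Called per batch from the background worker: one Reset
    // notification per batch instead of one per file.
    public void AddBatch(IEnumerable<string> batch)
    {
        var items = batch.ToList();
        _allFiles.AddRange(items);
        BoundFiles.AddRange(items);
    }

    // Called once at the end: sort the internal list, then rebuild the
    // bound collection in one operation.
    public void FinishLoading()
    {
        _allFiles.Sort();
        BoundFiles = new RangeObservableCollection<string>();
        BoundFiles.AddRange(_allFiles);
        // Remember to raise a PropertyChanged for BoundFiles here,
        // since the bound instance was re-created.
    }
}
```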

Friday, January 18, 2013

FileSystemWatcher tips

Recently in a WPF project I needed to monitor multiple folders for file and folder changes.  I remember that in VC++ we had to use the Win32 API, create our own thread data, and run it in a different thread.  In the .NET world, FileSystemWatcher seems to be the only reasonable choice, and you don't have to run multiple threads yourself; the .NET framework manages the monitoring thread for you.  However, when I started using it, I found some issues which could affect how, and whether, you can use it.  I've listed some concerns and tips here.  You may also reference this tips page, which covers some basic stuff you may be interested in.  I may write another post to list the related code.

1. Some events will be fired multiple times.

When you rename a file, several events can be fired.  This is a known issue with file watchers.  If you process the changes in the event handler, you could end up handling one change multiple times.  A good choice is to group the changes together and process them only once, so a Timer may be useful in this situation.  I will discuss Timers in a later section.

2. The monitored folder name change event is not fired.

I was really frustrated when I debugged and found no event was fired when the monitored folder itself was renamed.  Actually, FileSystemWatcher does catch the event internally; it even switches over to the new folder path and keeps monitoring it.  You just cannot catch the event yourself.  So this is by design, but apparently not what I wanted, because I needed to display the new folder name immediately.  Monitoring the rename event was a must.

The original idea came from this thread: create another watcher to monitor the folder's parent folder, and watch only the directory-name rename event.  Meanwhile, you can specify a filter to watch only the subfolder you are interested in.  The following code creates the parent watcher.

            _parentWatcher = new FileSystemWatcher();
            _parentWatcher.Path = (Directory.GetParent(_watchedFolder)).FullName;
            string filter = _watchedFolder.Substring(_watchedFolder.LastIndexOf('\\') + 1);
            _parentWatcher.Filter = filter;
            _parentWatcher.IncludeSubdirectories = false;
            _parentWatcher.Error += Watcher_Error;
            _parentWatcher.NotifyFilter = NotifyFilters.DirectoryName;
            _parentWatcher.EnableRaisingEvents = true;
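To actually pick up the rename, the parent watcher also needs a Renamed handler; a sketch (the handler name and the _watchedFolder update are my additions, not from the original code):

```csharp
// Hypothetical handler wiring for the parent watcher above.
_parentWatcher.Renamed += ParentWatcher_Renamed;

// ...

private void ParentWatcher_Renamed(object sender, RenamedEventArgs e)
{
    // e.OldFullPath is the folder we were watching; e.FullPath is its new name.
    _watchedFolder = e.FullPath;
    // Update the displayed folder name here, and re-point the parent
    // watcher's Filter if it relies on the old subfolder name.
}
```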

3. You cannot rename or delete the monitored folder's parent folders.

Let's say you are watching folder A.  You can change any files under A, rename folder A, and maybe delete A.  But you cannot rename A's parent B, B's parent C, and so on.  Check this thread.  When you try to rename one from Windows Explorer, you will get the infamous message:
The action can't be completed because the folder or a file in it is open in another program

This seems ridiculous since no file or folder is actually open; the folder is merely being monitored.  Again, this is by design.

A workaround could be to watch your entire drive, say C:\ or D:\.  As long as the watched root has no parent, you don't have to worry about a parent folder being renamed.  But this probably brings performance issues, because you watch many unnecessary changes, especially on network drives.

4. Use a Timer to group multiple events

To be honest, I am not a fan of Timers.  I always feel Timers are second-class citizens in the system and not reliable.  Maybe I am wrong, but I just have that feeling.  Still, in cases where missing an event is not critical, Timers can do the work.  Note that there are at least 3 timer classes: System.Timers.Timer, System.Threading.Timer, and System.Windows.Threading.DispatcherTimer.  This thread discussing Timers may be useful.  I used System.Timers.Timer in this case.  Somebody suggested DispatcherTimer, but it turns out DispatcherTimer behaves differently from Timer.

By stopping and restarting the timer on every event, you wait for the same quiet period after each one, which groups all the changes together and fires a single final change notification.
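A debounce sketch along those lines, with System.Timers.Timer (the 500 ms quiet period and the class shape are my assumptions):

```csharp
using System;
using System.IO;
using System.Timers;

public class DebouncedWatcher
{
    private readonly FileSystemWatcher _watcher;
    private readonly Timer _quietTimer;

    // Raised once per burst of file-system changes.
    public event Action ChangesSettled;

    public DebouncedWatcher(string path)
    {
        _quietTimer = new Timer(500) { AutoReset = false };
        _quietTimer.Elapsed += (s, e) => ChangesSettled?.Invoke();

        _watcher = new FileSystemWatcher(path);
        _watcher.Changed += OnAnyChange;
        _watcher.Created += OnAnyChange;
        _watcher.Deleted += OnAnyChange;
        _watcher.Renamed += OnAnyChange;    // RenamedEventArgs derives from FileSystemEventArgs
        _watcher.EnableRaisingEvents = true;
    }

    private void OnAnyChange(object sender, FileSystemEventArgs e)
    {
        // Restart the quiet period; Elapsed only fires after 500 ms of silence.
        _quietTimer.Stop();
        _quietTimer.Start();
    }
}
```

Note that Elapsed fires on a thread-pool thread, which is exactly why the Elapsed handler should marshal any GUI work back through the dispatcher.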

Another thing to notice: in the Timer's Elapsed event handler, we should call

            _uiDispatcher.BeginInvoke(new Action(() => {
                Messenger.Default.Send("StartRefreshing", "StartRefreshing");
            }));

not just simply send the message.
              Messenger.Default.Send("StartRefreshing", "StartRefreshing");
  
The difference is that the latter sends the message on a background thread, while the former sends it to the main UI thread.  If you try to change UI elements in the responding function, in the latter case you will get
              The calling thread cannot access this object because a different thread owns it
  
because in WPF only the main GUI thread can modify GUI elements.  We can use Dispatcher.Invoke or BeginInvoke to execute a delegate on the dispatcher thread.  In the middle of this work, I wanted to use DispatcherTimer, but it turned out the timer never fired.  Check this thread and this thread for the possible reason: a DispatcherTimer is created on one thread, only fires events on that thread, and only that thread's dispatcher can access those events.

5. Do we need explicit multiple threads?

Here comes another common concern: do we need to explicitly put the file monitor on another thread?  Check this discussion.  The answer is no, because the .NET framework handles it; the class creates a thread if it needs one.  So unless necessary, you don't have to create a thread for the file monitor.  This is different from the old Win32 way and, of course, a nice improvement.


Sunday, January 3, 2010

Silverlight and JavaScript interaction

In Silverlight business applications, we may need to pass data to the Silverlight control.  If the volume of data is not big, we can consider using Silverlight HTML Bridge to communicate between JavaScript and Silverlight. This post summarizes how to call JavaScript from Silverlight and vice versa.

The Silverlight HTML Bridge is the technology that enables calling JavaScript from Silverlight managed code and calling Silverlight managed code from JavaScript.  The following example uses some simple code to demonstrate both directions and points out some common difficulties developers may encounter.

Call Silverlight from JavaScript

In the test.aspx file, I have the following form: 
  
    <form id="form1" runat="server" style="height:100%;">
        <asp:ScriptManager ID="ScriptManager1" runat="server"></asp:ScriptManager>
        <div>
            <input type="button" value="Start Silverlight" onclick="startSilverlight();" />
            <input type="hidden" id="hiddenString" value="0.85,0.78,0.10,0.12,0.90" />
        </div>
        <div style="height:100%;">
            <asp:Silverlight ID="Xaml1" Visible="true" runat="server"
                Source="~/ClientBin/SilverlightTest.xap"
                MinimumVersion="2.0.31005.0" Width="100%" Height="100%" />
        </div>
    </form>

And I also have the following JavaScript in the same page: 
<script type="text/javascript" language="javascript">
    function startSilverlight() {
        var plugin = document.getElementById("Xaml1");
        if (plugin == null) {
            alert("xaml1 is null");
            return;
        }
        var strControl = document.getElementById("hiddenString");
        var str = strControl.value;
        plugin.Content.Page.StartShow(str);
    }
</script>

In Silverlight, the code-behind file contains the following code:
    [ScriptableType]
    public partial class Page : UserControl
    {
        public Page()
        {
            InitializeComponent();
            HtmlPage.RegisterScriptableObject("Page", this);
        } 
        [ScriptableMember]
        public void StartShow(string data)
        {
            char[] charSeparators = { ',' };
            string[] result;
            result = data.Split(charSeparators, StringSplitOptions.RemoveEmptyEntries);
            float[] values = new float[result.Length];
            int i = 0;
            foreach (var str in result)
            {
                values[i++] = float.Parse(str);
            } 
            UpdateLayout(values);
        }
    }

When users click the button on test.aspx page, the hidden text can be passed to Silverlight.

Call JavaScript from Silverlight

In test.aspx, I added
<script type="text/javascript" language="javascript">
    function getSilverlightValue(text) {
        alert(text);
    }

</script>

In Silverlight code-behind file, use HtmlPage to call the function: 
    HtmlPage.Window.Invoke("getSilverlightValue", "A test string");

Final Words

That covers how JavaScript and Silverlight interact.  However, I want to point out that this convenient method may have performance issues when you pass a high volume of data to Silverlight.  In my real project, I passed over 30 MB of raw data to Silverlight and needed to parse it, so the data took over 10 seconds to be fully displayed.  Some users may not want that experience.  If that's the case, you may seek help from WCF to retrieve the data asynchronously.