Friday, April 29, 2011

applying function on a value returned from MarkupExtension in xaml

Is there a way to apply a function to a value returned from a MarkupExtension such as {Binding ... } or {StaticResource ...} in XAML? Usage example: making the font size of button1 twice as large as the font size of button2.

From stackoverflow

Determine state of keyboard in non-keyboard-event method call.

Hi

In a textbox.TextChanged event I want to prevent some specific (processor intensive) functionality from executing if a key is still held down. E.g. while the user is deleting large regions of text by holding down the delete key I want to prevent the code from firing if the delete key is still held down.

I looked into Control.ModifierKeys, which is the closest thing a Google search kicks up, but in my specific case I want to trap whether the Delete or Backspace key is pressed - which ModifierKeys doesn't provide.

Does anyone know how to do this? Ideally something which doesn't involve me having to track the state of keyup/keydown for each and every textbox (although if there is no other way I'll do it like that).

From stackoverflow
  • Why not handle the KeyUp event instead of KeyPress? That way you can always be sure that the key has already been depressed and is in the process of being... errr.. released (for lack of a better word!)

    Ash : It's an idea, but initially I'm hooking into the TextChanged event on the textbox as it copes with things like cut & paste and text changed programmatically.
  • If you want to know the current keyboard state (via KeyPreview), can't you attach the event on the main form?

    Then check against the stored state from the textbox event handler.

  • Overriding Control.ProcessKeyPreview should do it.

    Ash : I've just looked into this and in this case I'd rather not have to subclass the control just for this functionality.
    leppie : And why is that an issue? If 10 lines of extra code is more bothersome than fixing your problem, then there are bigger problems...
  • In the end I've resorted to the external function GetKeyboardState.

    So for future reference, I'm doing something along these lines:

    [DllImport("user32.dll")] public static extern int GetKeyboardState(byte[] lpKeyState);
    
    private void myFunc()
    {
        // GetKeyboardState fills a 256-byte array (one byte per virtual key)
        byte[] keyboardState = new byte[256];
        int keystate = GetKeyboardState(keyboardState);
    
        // the high-order bit of each byte is set while that key is held down
        if ((keyboardState[(int)Keys.Back] & 0x80) != 0)
        {
           // backspace still pressed
           return;
        }
        else if ((keyboardState[(int)Keys.Delete] & 0x80) != 0)
        {
           // Delete key still pressed
           return;
        }
    
        // More processor intensive code ...
    }
    

    I am not sure of the function call overhead, but I'm pretty sure it's less than the cost of the processing it skips.

initialize variable involving vector data type

I have the following data types and variables:

typedef Seq< vector<int> > MxInt2d;
typedef std::vector<int>  edge_t;
typedef std::vector< edge_t> edge2d_t;

std::vector< edge2d_t > myEdgesIntersect;

I tried to initialize myEdgesIntersect like:

edge2d_t edge2d(2);

 //creating the vector of edges of intersections whenever an intersection is detected
for (int i=0;i<1;i++){
 edge2d[0][0]=sweepEvents[i][0];
 edge2d[0][1]=sweepEvents[i][1];
 edge2d[1][0]=sweepEvents[i+1][0];
 edge2d[1][1]=sweepEvents[i+1][1];
 std::cout<<edge2d[0][0]<<" "<<edge2d[0][1]<<endl;
 std::cout<<edge2d[1][0]<<" "<<edge2d[1][1]<<endl;
 myEdgesIntersect.push_back(edge2d);
 std::cout<<myEdgesIntersect[i][0][0]<<" "<<myEdgesIntersect[i][0][1]
            <<"    "<<myEdgesIntersect[i][1][0]<<" "<<myEdgesIntersect[i][1][1]<<endl;
}

But using this syntax, when I try to display the variable myEdgesIntersect it is not initialized with the given values of edge2d[..][..] (which display okay). I tried to display the variable myEdgesIntersect before the push_back and I got a bus error, so I think the problem is that the variable is not initialized. I tried to initialize it like:

 edge2d_t edge2d;
 edge2d[0][0]=0;
 edge2d[0][0]=0;
 edge2d[0][0]=0;
 edge2d[0][0]=0;
 edge2d[0][0]=0;
 myEdgesIntersect.push_back(edge2d);

but I got the same error, as it is actually the same thing as in the loop. Apparently I do not know how to initialize this quite complicated variable that I really need. If you have any suggestions I would be more than happy.

thanks in advance, madalina

From stackoverflow
  • If your array size really is fixed at compile time then you may be better off looking at a 2D array, rather than a 2-element vector containing two 2-element vectors.

    John Dibling : That would probably fix the problem by avoiding it, but it doesn't address why his code is wrong.
  • edge2d_t is a vector of vectors. In your first code block you set the size of the outer vector when you instantiate your edge2d variable, but not the inner vectors, so they are all size 0.

    try this:

    edge2d_t edge2d(2);
    edge2d[0].resize(2);
    edge2d[1].resize(2);
    
     //creating the vector of edges of intersections whenever an intersection is detected
    for (int i=0;i<1;i++){
            edge2d[0][0]=sweepEvents[i][0];
            edge2d[0][1]=sweepEvents[i][1];
            edge2d[1][0]=sweepEvents[i+1][0];
            edge2d[1][1]=sweepEvents[i+1][1];
            std::cout<<edge2d[0][0]<<" "<<edge2d[0][1]<<endl;
            std::cout<<edge2d[1][0]<<" "<<edge2d[1][1]<<endl;
            myEdgesIntersect.push_back(edge2d);
            std::cout<<myEdgesIntersect[i][0][0]<<" "<<myEdgesIntersect[i][0][1]
                <<"    "<<myEdgesIntersect[i][1][0]<<" "<<myEdgesIntersect[i][1][1]<<endl;
    }
    
    madalina : yes it worked. thanks, madalina
    John Dibling : If this worked for you, please accept the answer. Thanks!
  • Try:

    edge2d_t ev(10, edge_t(10, 0));
    

    (change the size from 10 to something that fits you.)

  • In addition to what John said, I suspect your 'for' loop may have an off-by-one error:

    for (int i=0;i<1;i++){ // i will only be 0
    

    perhaps you want

        for (int i=0;i<=1;i++){ // i will iterate 0,1
    

Monitor cpu usage per thread in java?

Hi, I would like to ask whether there is some simple way to determine CPU usage per thread in Java. Thanks

From stackoverflow
  • Though this is platform dependent, I believe what you're looking for is the ThreadMXBean: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/management/ThreadMXBean.html . You can use the getThreadUserTime method, for example, to get what you need. To check if your platform supports CPU measurement, you can call isThreadCpuTimeSupported() .

  • I believe JConsole provides this kind of information through a plugin.

    It uses the ThreadMXBean getThreadCpuTime() function.

    Something along the line of:

            long upTime = runtimeProxy.getUptime();
            List<Long> threadCpuTime = new ArrayList<Long>();
            for (int i = 0; i < threadIds.size(); i++) {
                long threadId = threadIds.get(i);
                if (threadId != -1) {
                    threadCpuTime.add(threadProxy.getThreadCpuTime(threadId));
                } else {
                    threadCpuTime.add(0L);
                }
            }
            int nCPUs = osProxy.getAvailableProcessors();
            List<Float> cpuUsageList = new ArrayList<Float>();
            if (prevUpTime > 0L && upTime > prevUpTime) {
                // elapsedTime is in ms
                long elapsedTime = upTime - prevUpTime;
                for (int i = 0; i < threadIds.size(); i++) {
                    // elapsedCpu is in ns
                    long elapsedCpu = threadCpuTime.get(i) - prevThreadCpuTime.get(i);
                    // cpuUsage could go higher than 100% because elapsedTime
                    // and elapsedCpu are not fetched simultaneously. Limit to
                    // 99% to avoid Chart showing a scale from 0% to 200%.
                    float cpuUsage = Math.min(99F, elapsedCpu / (elapsedTime * 1000000F * nCPUs));
                    cpuUsageList.add(cpuUsage);
                }
            }
    
  • By using java.lang.management.ThreadMXBean. How to obtain a ThreadMXBean:

     ThreadMXBean tmxb = ManagementFactory.getThreadMXBean();
    

    then you can query how much a specific thread is consuming by using:

     long cpuTime = tmxb.getThreadCpuTime(aThreadID);
    

    Hope it helps.

  • Indeed the ThreadMXBean object provides the functionality you need (however, it might not be implemented on all virtual machines).

    In JDK 1.5 there was a demo program doing exactly what you need. It was in the folder demo/management and it was called JTop.java

    Unfortunately, it's not there in Java 6. Maybe you can find it with Google or download JDK 5.

  • Hi,

    Interesting post. Here is where to recover this JTop.java: http://rejeev.googlepages.com/JTop.jar

    I've successfully installed and used it. However, I do not manage to retrieve the same percentage values from my own ThreadMXBean instance. One difference that I can see is that my .java program is running "locally" (together with the threads to monitor, on the same JVM instance) whereas JConsole runs "remotely" (on its own JVM instance).

    But in my case, unfortunately, I have to use the local manner: are you aware of any problem when monitoring a Java process locally?

    Any help being very appreciated,

    Regards.

Crash reporting watchdog for when my application locks up on a customer's machine

I'm working with a somewhat unreliable (Qt/windows) application partly written for us by a third party (just trying to shift the blame there). Their latest version is more stable. Sort of. We're getting fewer reports of crashes, but we're getting lots of reports of it just hanging and never coming back. The circumstances are varied, and with the little information we can gather, we haven't been able to reproduce the problems.

So ideally, I'd like to create some sort of watchdog which notices that the application has locked up, and offers to send a crash report back to us. Nice idea, but there are problems:

  • How does the watchdog know the process has hung? Presumably we instrument the application to periodically say "all ok" to the watchdog, but where do we put that such that it's guaranteed to happen frequently enough, but isn't likely to be on a code path that the app ends up on when it's locked?

  • What information should the watchdog report when a crash happens? Windows has a decent debug api, so I'm confident that all the interesting data is accessible, but I'm not sure what would be useful for tracking down the problems.

From stackoverflow
  • I think a separate app to do the watchdogging is likely to produce more problems than it solves. I'd suggest that instead, you first create handlers to generate minidumps when the app crashes, then add a watchdog thread to the application, which will DELIBERATELY crash if the app goes off the rails. The advantage to the watchdog thread (vs a different app) is that it should be easier for the watchdog to know for sure that the app has gone off the rails.

    Once you have the MiniDumps, you can poke around to find out the app's state when it dies. This should give you enough clues to figure out the problem, or at least where to look next.

    There's some stuff at CodeProject about MiniDumps, which could be a useful example. MSDN has more information about them as well.

    John Dibling : You don't have to crash the app in order to create the minidumps. You can call MiniDumpWriteDump() any time.
  • You want a combination of a minidump (use DrWatson to create these if you don't want to add your own mini-dump generation code) and userdump to trigger a minidump creation on a hang.

    The thing about automatically detecting a hang is that it's difficult to decide when something's hung and when it's just slow or blocked on IO wait. I personally prefer to allow the user to crash the app deliberately when they think it's hung. Apart from being a lot easier (my apps don't tend to hang often, if at all :) ), it also helps them to "be part of the solution". They like that.

    Firstly, check out the classic bugslayer article concerning crashdumps and symbols, which also has some excellent information regarding what's going on with these things.

    Second, get userdump which allows you to create the dumps, and instructions for setting it up to generate dumps

    When you have the dump, open it in WinDBG, and you will be able to inspect the entire program state - including threads and callstacks, registers, memory and parameters to functions. I think you'll be particularly interested in using the "~*kp" command in Windbg to get the callstack of every thread, and the "!locks" command to show all locking objects. I think you'll find that the hang will be due to a deadlock of synchronisation objects, which will be difficult to track down as all threads tend to wait on a WaitForSingleObject call, but look further down the callstacks to see the application threads (rather than 'framework' threads like background notifications and network routines). Once you've narrowed them down, you can see what calls were being made, possibly add some logging instrumentation to the app to try and give you more information ready for the next time it fails.

    Good luck.

    P.S. A quick Google reminded me of this: Debugging deadlocks. (CDB is the command-line equivalent of WinDbg.)

  • You can use ADPlus from Microsoft's Debugging Tools for Windows to identify the hangs. It will attach to your process and create a dump (mini or full) when the process hangs or crashes.

    WinDbg is portable, and does not have to be installed (you do have to configure the symbols, though). You can create a special installation that will launch your app using a batch, which will also run ADPlus after your app starts (ADPlus is a commandline tool, so you should be able to find a way to incorporate it somehow).

    BTW, if you do find a way to recognize the hang internally and are able to crash the process, you can register with Windows Error Reporting so that the crash dump will be sent to you (should the user allow it).

  • Don't bother with a watchdog. Subscribe to Microsoft's Windows Error Reporting (winqual.microsoft.com). They'll collect the stack traces for you. In fact, it's quite likely they're already doing so today; they just don't share them until you sign up.

flex air datagrid setfocus cell by cell

Hi all,

I have a datagrid with a custom itemRenderer. Now I need to set focus in the grid, cell by cell. I Googled and found a way, i.e.:

var findrowindex:int = 0;

//nextButton Click Handler
var focusedCell:Object = new Object();
focusedCell.columnIndex = 3;
focusedCell.rowIndex = findrowindex;
dg.editedItemPosition = focusedCell;
dg.validateNow();
findrowindex++;

Using this I am able to get focus in a cell, but the focus is not moving from one cell to another. Please suggest where I am going wrong, or suggest any other way to achieve this.

Thanks.

From stackoverflow
  • Hi, if you set editable=true for those columns you want to edit, you can move the focus with the Tab key. Use an itemEditor instead of an itemRenderer. If you want the same look and feel when focus is in and when focus is out, use a combination of itemRenderer and itemEditor. Regards, Arivu

supporting persistent HTTP connections in my proxy server

Hi all, I am implementing an HTTP caching proxy server in C++. I am done with most of it but I am stuck at one point.
What I am doing is creating a thread with a socket each time a request comes from the browser. I parse the request, check for its availability in the cache, and if not found forward it to the end WWW server. In both cases I write the response received on the connected socket. Now the problem is that until I close the socket, the browser doesn't consider the transfer complete and waits indefinitely.
This way I can't use a socket for more than one request; in other words, I can't support persistent connections.
Any help will be appreciated.

Thanks,

From stackoverflow
  • What headers are you sending back to the client?

    You should be including:

    Content-Length: ...
    Keep-Alive: timeout=..., max=...
    Connection: Keep-Alive
    

    In particular, the Content-Length header is essential with persistent connections so that the client knows how much data to read. See section 8.1.2.1 of RFC 2616.

    Alternatively, if you want to tell the client to break the connection, send:

    Connection: close
    
  • Now the problem is until and unless i close the socket, the browser doesn't assumes the transfer to be complete and waits indefinitely.

    Right. HTTP 1.1 uses Keep-Alive by default.

    This way I can't use a socket for more than one connection, in other words I can't support persistent connections.

    I'm not sure I understand you, because that persistent connection you have IS a persistent connection.
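To make the Content-Length point from the first answer concrete, here is a small illustrative helper (the function name and structure are assumptions, not from the question's proxy) showing the headers a response needs so the browser can delimit the body without the socket closing:

```cpp
#include <sstream>
#include <string>

// Build an HTTP/1.1 response that a client can delimit without the server
// closing the connection: Content-Length tells it exactly how many body
// bytes to read, after which the same socket can carry the next request.
std::string makeKeepAliveResponse(const std::string& body,
                                  const std::string& contentType)
{
    std::ostringstream resp;
    resp << "HTTP/1.1 200 OK\r\n"
         << "Content-Type: " << contentType << "\r\n"
         << "Content-Length: " << body.size() << "\r\n"
         << "Connection: keep-alive\r\n"
         << "\r\n"   // blank line ends the header section
         << body;
    return resp.str();
}
```

If the proxy forwards a response from an origin server that used `Transfer-Encoding: chunked` instead of Content-Length, it must either preserve the chunked framing or buffer the body and add its own Content-Length before relaying it.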

How to open VS 2008 solution in VS 2005?

I have seen Solutions created in Visual Studio 2008 cannot be opened in Visual Studio 2005 and tried workaround 1. I have yet to try workaround 2.

But as that link is a bit old, and out of desperation, I'm asking here: is there any converter available?

From stackoverflow
  • I'd say you should restore your 2005 version from source control, assuming you have source control and a 2005 copy of the file.

    Otherwise, there are plenty of pages on the net that detail the changes, but unfortunately no ready-made converter program that will do it for you.

    Be aware that as soon as you open the file in 2008 again, it'll be upgraded once more.

    Perhaps the solution (no pun intended) is to keep separate copies of the project and solution files for 2005 and 2008?

    Why do you want to open it in 2005 anyway?

  • I don't have VS 2008 yet and I wanted to open an open-source solution which was done in VS 2008.

    Guess I have to fiddle around or wait till VS 2008 is shipped.

  • Here's a visual studio 2008 to 2005 downgrade tool And another one.

    I haven't tried either of these, so please report back if they are successful for you ;-)

  • Leon says

    Here's a visual studio 2008 to 2005 downgrade tool And another one.

    Both use the workaround 1 that I linked in my question, so I didn't have any luck with them :(

  • You can download and use Visual Studio 2008 Express editions. They're free...

  • I have a project that I work on in both VS 2005 and VS 2008. The trick is just to have two different solution files, and to make sure they stay in sync. Remember that projects keep track of their files, so the main thing solutions do is keep track of which projects they contain; pretty easy to keep in sync.

    So just create a new blank solution in VS 2005, and then add each of your projects to it, one by one. Be sure to name the solutions appropriately. (I call mine ProjectName.sln and ProjectNameVs2008.sln.)

    Which is a long way of saying you should try workaround #2.

  • Hey Guys,

    I could resolve the problem of opening a VS 2008 web service project in VS2005.

    Steps to follow: create a new web service project in VS 2005; compile the project; open the project file in Notepad; copy the bold-font lines ** - Debug AnyCPU 8.0.50727 2.0 {3C596F22-0A57-4B9A-ABD3-C2BEFA5DA0B7} {349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc} Library Properties WebService1 WebService1 true full false bin\ DEBUG;TRACE prompt 4 pdbonly true bin\ TRACE prompt 4

    ** Service1.asmx Component ** -

    ** --> False True 3124 / False

    Then paste them over the web service project file created in VS 2008. Do not replace the whole file (your references will go); replace only where the version is given.

    My problem was resolved by following these steps; I am sure yours will be too.

  • No way to do this... :-(

How to get next sequence number in DB2 using Entity Framework?

I want to retrieve the next sequence number via an adhoc query in Entity Framework. I'm using:

LogEntities db = new LogEntities();

ObjectQuery<Int64> seq = db.CreateQuery<Int64>("SELECT AUDITLOG.EVENTID_SEQ.NEXTVAL from sysibm.sysdummy1");

This returns the following error:

ErrorDescription = "'sysibm.sysdummy1' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly."

I guess this is because sysdummy1 is not a mapped table in my model.

Anyone know a way I can perform this query using the LINQ 2 Entity context?

From stackoverflow
  • An ObjectQuery needs to use Entity SQL, not "regular" SQL. If you want to write "regular" SQL, you need to use a store connection, not an ObjectQuery. That said, I kind of wonder why you're manually retrieving a sequence number. If the entity property is set to auto-increment on the server, it will be retrieved automatically when you SaveChanges. If you need to get a store connection, there is a method on the EntityConnection type, CreateDbCommand, which does this. Again, I recommend that you don't do it. Using this feature makes your code provider-specific; most Entity Framework code should be provider-agnostic.

  • Thanks for your answer Craig. The reason I am unable to use an auto incrementing identity column is because this particular logical table is physically partitioned into 31 separate (daily) tables and the ID needs to be unique across all tables.

    I ended up creating a stored procedure to retrieve the next number from the sequence, and then adding that to my EF Model store.

    private static long GetNextEventId(DbConnection dbConnection)
    {
        using (DbCommand cmd = dbConnection.CreateCommand())
        {
            cmd.CommandText = "LogEntities.GetNextEventId";
            cmd.CommandType = CommandType.StoredProcedure;
    
            // Execute the command
            return Convert.ToInt64(cmd.ExecuteScalar());
        }
    }
    

    Here's another way of calling a stored proc in EF that returns a scalar:

    Using Stored Procedures That Return Non Entity Type

  • Another option would be to create a table with an identity column and use the value from the identity value function after the insert.

Flex Write File

I checked in the API that writing a file is only possible in Flex AIR. Despite that, is there still a way in Flex to write a file on the server or on the client machine?

More concretely, it's because I have a String which I will decode with mx.utils.Base64Decoder into a doc. And I need to open it in Word directly on the client side, or write it on the server and return the link to the client for downloading.

Thanks

From stackoverflow
  • AIR applications run on the users' desktops, so they allow you to write to the users' desktops. Nothing stops Flex/AIR applications from communicating with the server, writing some files there (using a suitable backend technology) and returning a link back to the user.

    maoanz : I meant not using AIR, only a Flex web application. Or if it's possible to run an AIR application from Flex. Thanks for replying.
    dirkgently : @maoanz: As I said, it really shouldn't be difficult since even Flex allows you to upload files. Do you need any specific API references?
  • I'm doing something similar - I have built a custom grid in Flex that contains an additional button on it that allows the user to open the data up in Excel. I've done this by writing a function in PHP on the server which my Flex speaks to. This creates the CSV file on the server and the function passes the link back to the client, which in turn offers the user the chance to open the data up in Excel.

    I'm using the Zend technologies to accomplish this - I recommend you take a look; they're extremely good. I used to use Flex Data Services and Java to do this but just recently switched to PHP because development time is cut down drastically as a result.

    Hope this helps.

    Jamie.

Do I need to run Boost's configure script to build libraries (e.g. graph, test)?

If I don't want to install Boost but I want to build one of the libraries (i.e. BGL, for using graphml) is there a way to build the library in a system independent manner without running .configure?

From stackoverflow
  • The short answer is: it depends.

    I'd just try to copy the relevant sources to your project and build them. Boost is fairly good at supporting that sort of thing, as it doesn't rely too much on configure, but rather on compiler-/platform-specific defines to sort out which platform it's being built on.

    I suspect that something like BGL would be an even better candidate, while highly platform-dependent code like the threading code wouldn't work as well. Having said that, I know at least one project that successfully copies Boost's threading code and Spirit into its own repository, and it works like a charm.

    I'd just give it a try and see what happens.

TemplateBindings in Custom Controls

I'm just mucking about with custom controls in Silverlight and for the life of me I can't get the TemplateBindings to work. Can someone give this reduced version a once-over to see if I'm missing something?

So my ControlTemplate in the generic.xaml looks like

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:NumericStepperControl;assembly=NumericStepperControl">
    <Style TargetType="local:NumericStepper">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="local:NumericStepper">
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition />
                            <ColumnDefinition />
                        </Grid.ColumnDefinitions>

                        <Border Grid.Column="0" BorderBrush="Black" BorderThickness="2"  Width="50" Height="30">
                            <TextBlock Width="50" Height="30" Text="{TemplateBinding Value}" />
                        </Border>
                    </Grid>
                </ControlTemplate>
            </Setter.Value>        
        </Setter>
    </Style>
</ResourceDictionary>

and my control class looks like:

namespace NumericStepperControl
{
    public class NumericStepper : Control
    {
        public static readonly DependencyProperty ValueProperty = DependencyProperty.Register("Value", typeof(int), typeof(NumericStepper), new PropertyMetadata(20));

        public NumericStepper()
            : base()
        {
            DefaultStyleKey = typeof( NumericStepper );
        }

        public int Value
        {
            get
            {
                return (int)GetValue(ValueProperty);
            }
            set
            {
                SetValue(ValueProperty, value);
            }
        }
    }
}

I'm expecting when this runs the TextBlock will display the number 20. Any ideas as to why this isn't working?

As a side note, I have a separate project which contains a reference to the NumericStepperControl assembly, and when it runs the controls seem to build correctly.

Edit: after a bit more investigation I have discovered that if I change the type of the Value property to a string it works fine. Why does a TextBlock not just call ToString on whatever is passed into it? Is there a way round this, as I can see it happening a lot?

From stackoverflow
  • After a bit of digging it turns out that the TextBlock actually doesn't call ToString on whatever is passed in. To work around this you must use a Converter to call a ToString for you.

    Here's the rub though, TemplateBinding doesn't support Converters. You have to add the TemplateBinding to the DataContext and then use normal Binding in the Text property along with the converter.

    So the TextBlock markup becomes

     <TextBlock Width="50" Height="30" DataContext="{TemplateBinding Value}"  Text="{Binding Converter={StaticResource NumberTypeToStringConverter}}" />
    

    My custom converter:

    public class NumberTypeToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value == null)
                {
                    throw new NullReferenceException();
                } 
    
                return value.ToString(); 
            }
    
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                MethodInfo methodInfo = targetType.GetMethod("Parse");
    
                if (methodInfo == null)
                {
                    throw new MissingMethodException("The targetType to convert back to a Number must implement a Parse method");
                }
    
                return methodInfo.Invoke(null, new object[] { value });
            }
        }
    

    This seems like a bit of a workaround and I'd be interested to hear if it has any adverse implications. Also, if anyone is reading this and there is anything wrong with my converter, please let me know.

    Cheers

    Mark Cooper : I just burnt almost a day trying to figure this out. Thanks for posting your own answer James - very diligent!!
  • There are different approaches to get around the problem. I found this description by Marek Latuskiewicz.

How can I set permissions on an RSS file?

I want to create an in-house RSS feed (I work for 3 Mobile, Australia) for consumption on an INQ1 mobile phone, or any other RSS reader for that matter. However, testing it out on the phone's built-in RSS reader, I realize that without the ability to password protect the feed, or otherwise restrict access to it, I stand little chance of being able to develop this idea further.

One thing I thought of was to periodically change the Uri for the feed, so managers who had left the company couldn't continue to subscribe and see sensitive information, but the idea of making users do that would make it a harder sell, and furthermore is terribly inelegant.

Does anybody know how to make it so that prior to downloading a feed, a reader would have to authenticate the user? Is it part of the metadata within the feed, or something you would set in the reader software?

Update: I should have explained that I have already placed folder-level permissions on the parent folder, which brings up the normal authentication dialog when the feed is viewed in a browser, but which just results in a failed update with no explanation or warning in the phone's RSS reader, and is indistinguishable from the file being missing when I next try to refresh the feed.

From stackoverflow
  • I believe you would set the permissions on the feed itself, forcing authentication, much like the Twitter feeds. The problem with this is that many readers (including Google Reader) don't yet support authenticated feeds.

  • The idea is to have authentication over a secure channel. These posts explain it pretty well:

  • Assuming your RSS feed is over HTTP then basic HTTP authentication would probably do the trick. This would either be done at the web server level (in IIS for example) or via whatever framework you're using to produce the feed (in ASP.NET for example).

    The authentication scheme (basic username/password, NTLM, Kerberos etc) is up to you. If you're using WCF to produce the feed then these decisions are things you can make later and apply via config if needed.

    Are you simply looking to authenticate consumers of the feed, or also to encrypt it to prevent the information from being read by a "man in the middle"? If you require encryption then SSL is probably the easiest to implement.

    You should avoid simply "hiding" the RSS feed by changing its name.

    update: Your question (with its update) sounds like you're actually having issues with the RSS client on the device. You need to determine whether the phone's RSS client understands how to deal with basic/digest authentication etc.

    Assuming it doesn't, is there anything in the HTTP request that could allow you to associate a device with a user? Is there an HTTP Header that gives you a unique device ID? If so, you might be able to then perform a lookup against this data to perform your own weak-authentication, but you should remember that this sort of authentication could be easily spoofed.

    Does the device have a client certificate that could be used for mutual SSL? If so, then that would be ideal.

  • Authentication by the webserver is probably the best solution; however, to get round the issue of readers not supporting it (Google has been mentioned and I have issues with Safari) you could implement a simple key-value to append to the URL.

    I.E.
    
    http://www.mydomain/rss.php?key=value
    

    Your system could then "authenticate" the key-value and output the RSS; an invalid key-value could get a standard "authentication failed" message as a single-item RSS feed, or return a 40x error.

    It's not very secure, as the key-value is visible in the URL, but it's a trade-off. Serving the feed over HTTPS (even without authentication) would be slightly more secure.

    Rafe Lavelle : But that would still involve people having to change the value (of the key) periodically, to change the "password", which is hard to do in some mobile readers, without having to delete the Feed and totally re-enter the Uri.
    rjstelling : Could you tie it to an IP address or MAC address (or both)? Do bear in mind that MAC address spoofing is very easy.
  • If the reader in the phone doesn't support HTTP Basic or Digest, your best bet is to create a unique url to the feed for each consumer. Have the customer login and generate a link with some token in it that is unique for that user. If the user ever leaves, you can then deny that token, shutting down access.

    If you go this route, you probably want to investigate including the Feed Access Control bits in your feed. It's not perfect, but it is respected by the bigger aggregators, so if one of your clients decides to subscribe to the feed with Reader or Bloglines, things shouldn't show up in search results.

    Rafe Lavelle : The unique url solution is the one I had come to myself eventually, so thanks for the support your answer has given me! My team would have to manage a list of consumers/uniqueIDs, each of which would form part of the feed's url. If someone leaves, the ID is removed. I was also thinking of having a weak reciprocal security element: each time the user updates their feed, the first item in the feed could say 'Last viewed 30 minutes ago'. If that doesn't ring true, then something's wrong.
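The unique-url-per-consumer scheme the poster settled on can be sketched concretely. A minimal illustration in Python (the secret, host, and path are hypothetical, and the original stack is not specified): derive each subscriber's token with an HMAC over their user id, so the team only stores the list of active user ids, and the feed endpoint rejects any token that doesn't match an active user.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice, load from config, not source.
SECRET = b"rotate-me-server-side"

def feed_token(user_id: str) -> str:
    """Derive a stable per-user token for the feed URL."""
    return hmac.new(SECRET, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def feed_url(user_id: str) -> str:
    """Unique feed URL handed to each subscriber (hypothetical host/path)."""
    return "https://example.com/feeds/" + feed_token(user_id) + ".xml"
```

Because the token is deterministic, the endpoint can precompute the token for each active user and serve only those; removing a departed manager's id from the active list immediately invalidates their URL.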

Default value for field in Django model

Suppose I have a model:

class SomeModel(models.Model):
    id = models.AutoField(primary_key=True)
    a = models.CharField(max_length=10)
    b = models.CharField(max_length=7)

Currently I am using the default admin to create/edit objects of this type. How do I remove the field b from the admin so that it cannot be set when an object is created, and instead always receives a default value of 0000000?

From stackoverflow
  • Set editable to False and default to your default value.

    http://docs.djangoproject.com/en/dev/ref/models/fields/#editable

    b = models.CharField(max_length=7, default='0000000', editable=False)
    

    Also, your id field is unnecessary. Django will add it automatically.

    Yuval A : yeah I know about the id field. Thanks for the help!
  • You can set the default like this:

    b = models.CharField(max_length=7,default="foobar")
    

    and then you can hide the field with your model's Admin class like this:

    class SomeModelAdmin(admin.ModelAdmin):
        exclude = ("b",)  # note the trailing comma: ("b") is just a string, not a tuple
    

How to enumerate column names with NHibernate?

I've got a class with a bunch of [ColumnName("foo")] NHibernate attributes. Is there an easy way to ask NHibernate to list all of the ColumnNames for a given class?

It sounds like it should be really easy but I'm just not seeing any kind of inspection in the NHibernate docs (or maybe I'm just blind today).

From stackoverflow
  • Use LINQ and reflection:

    var columns = typeof(TheClass).GetProperties()
        .Where(property => property.GetCustomAttributes(typeof(ColumnNameAttribute), false).Length > 0)
        .Select(property => property.Name);
    
  • Use NHibernate's Metadata

    // get an instance to the metadata 
    IClassMetadata metadata = sessionfactory.GetClassMetadata(typeof(MyEntity));
    
    // use properties and methods from the metadata:
    // metadata.PropertyNames
    // metadata.PropertyTypes
    // metadata.GetIdentifier()
    // and more
    
    // or get the metadata for all classes at once
    IDictionary allClassMetaData = sessionfactory.GetAllClassMetadata();
    metadata = (IClassMetadata) allClassMetaData[typeof(MyEntity)];
    

    You get what NHibernate actually knows, independent of how it is defined; using attributes, xml mappings or FluentNHibernate. This makes it more stable and more reliable than using reflection on your own.

  • I had this same problem, but found IClassMetadata doesn't have any column information, just property types, names, identifier, and table information.

    What worked for me:

    PersistentClass persistentClass = cfg.GetClassMapping(typeof(MyEntity));
    Property property = persistentClass.GetProperty(propertyName);
    property.ColumnIterator   // <-- the column(s) for the property
    

Where to put auditing or logging?

I'm currently working on an ASP.NET MVC project using NHibernate and I need to keep track of changes on some entities in order to be able to make some reports and queries over the data. For this reason I wanted to have the data in a table, but I'm trying to decide where to "hook" the auditing code.

On the NHibernate layer:

  • PRO: Powerful event system to track any change
  • PRO: Nothing can be changed in the application without notice (unless someone uses raw SQL...)
  • CON: As I have a generic repository, I have to filter out the entities I care about (I don't need to track everything).
  • CON: I don't have easy access to the controller and the action so I can only track basic operations (update, delete...). I can get the HttpContext at least to get some info.

On an Action Filter at Controller level:

  • PRO: Full information on the request and web application status. This way I can distinguish an "edit" from a "status change" and be more descriptive in the audit information.
  • CON: Someone can forget a filter and an important action can be taken without notice which is a big con.

Any clue?

Update: See how to Create an Audit Log using NHibernate Events.

From stackoverflow
  • I'd rather put it in the data (NHibernate in your case) layer. Putting it in the controller and asking other people (or yourself, in the future) to implement controllers accordingly conflicts with object-oriented design principles.

  • I think doing this at the repository level is a much better fit. Mostly because you may, in the future, decide to add some method of access to your repository which does not go through MVC (e.g., a WCF interface to the data).

    So the question becomes, how do you address the cons you've listed about doing it on the NHibernate layer?

    Filtering out the useful entities is simple enough. I would probably do this via a custom attribute on the entity type. You can tag the entities you want to track, or the ones you don't; whichever is easier.

    Figuring out what the controller really intended is harder. I'm going to dispute that you can "get the HttpContext"; I don't think it is a good idea to do this in a repository, because of the separation of concerns. The repository should not be dependent on the web. One method would be to create custom methods on the repository for actions you'd like to track differently; this is especially attractive if there are other aspects of these edits which behave differently, such as different security. Another method is to examine the changes by comparing the old and new versions of the objects and derive the actual nature of the change. A third method is to make no attempt to derive the nature of the change, but just store the before and after versions in the log so that the person who reads the log can figure it out for themselves.

    Adrian Grigore : A fourth method would be to have the controller provide a mandatory "Comments" parameter that describes that database operation.
    Peter Meyer : +1 Great suggested approaches on dealing with the CONS of the problem. I'd think I'd prefer custom repository methods for the different intents -- it not only provides the desired auditing results, but provides good expression of semantics in the code as well.
    Marc Climent : After listing the cons of the Filter I realized it was not a good idea. Thanks for the tips and I agree that I will violate the SoC accessing the HttpContext. I may use a Filter to implement the "comments" suggested by Adrian and go the NHibernate way.
  • I do this with NHibernate. Objects that require auditing implement an IAuditable interface, and I use an Interceptor to do the auditing on any object that implements IAuditable by intercepting OnFlushDirty, OnDelete, and OnSave.
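One suggestion above is to derive the nature of a change by comparing the old and new versions of the object. The comparison itself is language-neutral; here is a small sketch of the idea in Python (the entities are plain dicts here purely for illustration, while the original context is .NET/NHibernate):

```python
def diff_entity(before: dict, after: dict) -> dict:
    """Return {field: (old_value, new_value)} for every field whose value changed."""
    return {
        field: (before.get(field), after.get(field))
        for field in set(before) | set(after)
        if before.get(field) != after.get(field)
    }
```

An audit log entry built from such a diff lets the reader distinguish an "edit" from a "status change" after the fact, without the repository ever touching the HttpContext.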

sql function to return table of names and values given a querystring

Anyone have a t-sql function that takes a querystring from a url and returns a table of name/value pairs?

eg I have a value like this stored in my database:

foo=bar;baz=qux;x=y

and I want to produce a 2-column (key and val) table (with 3 rows in this example), like this:

name  | value
-------------
foo   | bar
baz   | qux
x     | y

UPDATE: there's a reason I need this in a t-sql function; I can't do it in application code. Perhaps I could use CLR code in the function, but I'd prefer not to.

UPDATE: by 'querystring' I mean the part of the url after the '?'. I don't mean that part of a query will be in the url; the querystring is just used as data.

From stackoverflow
  • I'm sure TSQL could be coerced to jump through this hoop for you, but why not parse the querystring in your application code where it most probably belongs?

    Then you can look at this answer for what others have done to parse querystrings into name/value pairs.

    Or this answer.

    Or this.

    Or this.

    Rory : There are reasons that in my case I need to do this from the database layer, unfortunately as obviously it would be a lot more straightforward to do using .NET code as per those links.
  • Please don't encode your query strings directly in URLs, for security reasons: anyone can easily substitute any old query to gain access to information they shouldn't have -- or worse, "DROP DATABASE;". Checking for suspicious "keywords" or things like quote characters is not a solution -- creative hackers will work around these measures, and you'll annoy everyone whose last name is "O'Reilly."

    Exceptions: in-house-only servers or public https URLs. But even then, there's no reason why you can't build the SQL query on the client side and submit it from there.

    Rory : thanks, but i meant 'querystring' as in the part of the url after the ?. I don't mean that parts of a query will be in the url.
    j_random_hacker : @Rory: Yes, that's what I mean too.
    Rory : ok, in that case your response isn't an answer to my question: I have data in my database and I want to turn it into a table of name/value pairs. I'm not putting queries into the query string.
    j_random_hacker : @Rory: Ah, I misunderstood sorry (three times actually...) So you mean you have a URL like "http://mysite.com/xyz?foo=bar;baz=qux;x=y" and you want to produce a 2-column (key and val) table (with 3 rows in this example)? I'm sure it can be done, but I'm not sure how unfortunately.
    Rory : yes exactly. no worries - i guess it shows i should write my questions clearer. thanks anyway.
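None of the answers supply the T-SQL function itself. The logic being asked for is just a split on ';' and then on the first '='; for reference it is sketched here in Python (illustrative only, since the poster needs it inside the database layer):

```python
def parse_pairs(querystring: str) -> list:
    """Split a string like 'foo=bar;baz=qux;x=y' into (name, value) rows."""
    return [
        tuple(part.split("=", 1))
        for part in querystring.split(";")
        if "=" in part
    ]
```

For example, `parse_pairs("foo=bar;baz=qux;x=y")` yields the three rows from the question. A T-SQL equivalent would typically loop with CHARINDEX/SUBSTRING (or use a split helper) and INSERT each pair into a table variable.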

Convert jpg image to gif, png & bmp format using PHP

Hi all,

How can I convert a single jpg image into 3 different image formats (gif, png and bmp) using PHP?

Please help

From stackoverflow
  • Use libGD — http://www.php.net/manual/en/book.image.php

    riad : But it's a huge library. Could you please help with a particular function? Which code do I have to use?
  • You first create an image object out of your file with imagecreatefromjpeg(). You then dump that object into different formats (using imagegif() for example):

    $imageObject = imagecreatefromjpeg($imageFile);
    imagegif($imageObject, $imageFile . '.gif');
    imagepng($imageObject, $imageFile . '.png');
    imagewbmp($imageObject, $imageFile . '.bmp');
    
    riad : Thanks bro, it's working properly.
    Evert : Note that WBMP is not the BMP as you might know it: http://en.wikipedia.org/wiki/Wbmp
    Jeff Winkworth : great answer, exactly what I was looking for too

Declaring an Index as unique in SQL Server

If I know an index will have unique values, how will declaring it as such affect performance on inserts or selects?

If the optimiser knows the index is unique how will that affect the query plan?

I understand that specifying uniqueness can serve to preserve integrity, but leaving that discussion aside for the moment, what are the performance consequences?

From stackoverflow
  • Of course the optimizer will take uniqueness into consideration. It affects the expected row count in query plans.

  • Yes, it will be taken into consideration by the query engine.

  • Perhaps more important: the uniqueness will protect the data integrity. Performance would be a poor reason to ignore this.

    Performance could be affected positively, negatively, or not at all: it depends on the query, whether the index is used, etc.

  • Performance is negatively affected when inserting data. It needs to check the uniqueness.

    kquinn : And positively affected when selecting data: the optimizer can exploit the uniqueness.
    Quassnoi : There is no performance difference between inserting a field into UNIQUE and non-UNIQUE index. The engine should parse the B-tree anyway, uniqueness just affects the decision whether to insert this value into given place in the B-tree or not.
    Michael Haren : I'm very curious about this, too. Benchmarks or credible sources would be much appreciated.
    Jonathan Leffler : Performance is negatively affected when inserting data into a non-unique index; it has to check the uniqueness or not, and deal with adding the new row into the pre-existing slot or creating a new slot. There isn't much difference.
    Stefan Steinegger : Found this thread: http://www.sqlservercentral.com/Forums/Topic651562-360-1.aspx#bm652904 "The optimizer will take into account when an index is unique and it can improve performance, but it really does depend on the query." (...) "It will slow down inserts slightly, but probably not enough to notice." I think, it doesn't matter much.
  • Long story short: if your data are intrinsically UNIQUE, you will benefit from creating a UNIQUE index on them.

    See the article in my blog for detailed explanation:


    Now, the gory details.

    As @Mehrdad said, UNIQUENESS affects the estimated row count in the plan builder.

    UNIQUE index has maximal possible selectivity, that's why:

    SELECT  *
    FROM    table1 t1, table2 t2
    WHERE   t1.id = :myid
            AND t2.unique_indexed_field = t1.value
    

    almost surely will use NESTED LOOPS, while

    SELECT  *
    FROM    table1 t1, table2 t2
    WHERE   t1.id = :myid
            AND t2.non_unique_indexed_field = t1.value
    

    may benefit from a HASH JOIN if the optimizer thinks that non_unique_indexed_field is not selective.

    If your index is CLUSTERED (i.e. the rows themselves are contained in the index leaves) and non-UNIQUE, then a special hidden column called a uniquifier is added to each index key, thus making the key larger and the index slower.

    That's why a UNIQUE CLUSTERED index is in fact a little more efficient than a non-UNIQUE CLUSTERED one.

    In Oracle, a join on a UNIQUE INDEX is required for so-called key preservation, which ensures that each row from a table will be selected at most once and makes a view updatable.

    This query:

    UPDATE  (
            SELECT  *
            FROM    mytable t1, mytable t2
            WHERE   t2.reference = t1.unique_indexed_field
            )
    SET     value = other_value
    

    will work in Oracle, while this one:

    UPDATE  (
            SELECT  *
            FROM    mytable t1, mytable t2
            WHERE   t2.reference = t1.non_unique_indexed_field
            )
    SET     value = other_value
    

    will fail.

    This is not an issue with SQL Server, though.

    One more thing: for a table like this,

    CREATE TABLE t_indexer (id INT NOT NULL PRIMARY KEY, uval INT NOT NULL, ival INT NOT NULL)
    CREATE UNIQUE INDEX ux_indexer_ux ON t_indexer (uval)
    CREATE INDEX ix_indexer_ux ON t_indexer (ival)
    

    , this query:

    /* Sorts on the non-unique index first */
    SELECT  TOP 1 *
    FROM    t_indexer
    ORDER BY
            ival, uval
    

    will use a TOP N SORT, while this one:

    /* Sorts on the unique index first */
    SELECT  TOP 1 *
    FROM    t_indexer
    ORDER BY
            uval, ival
    

    will use just an index scan.

    For the latter query, there is no point in additional sorting on ival, since uval are unique anyway, and the optimizer takes this into account.

    On sample data of 200,000 rows (id == uval == ival), the former query runs for 15 seconds, while the latter one is instant.

    Michael Haren : Is there a significant difference between hash joins and nested loop joins? It's not clear if you're suggesting that the distinction justifies one or the other.
    Quassnoi : For the query above, HASH JOIN's are more efficient on non-selective indexes, NESTED LOOP's are more efficient on selective ones. UNIQUE index is the most selective index ever, and the optimizer will take the index uniqueness into account when estimating selectivity and choosing the join algorithm.
    Michael Haren : Are you saying then that there's not a general answer (it depends heavily on the query)? Is there no easy answer to this?: if the index *could* be unique, should I make it unique or not?
    Quassnoi : Yes, if the index could be unique, you certainly should make it unique. There is no benefit from using non-UNIQUE index on intrinsically UNIQUE data. UNIQUE helps the SQL Server to understand that the data are really unique and optimize the algorithms.

capturing webform event for workflow on asp.net site

The basic idea is that I have a website and a workflow. I need to capture button clicks from aspx pages in my workflow.

I have a solution with a workflow project and a website project, and the web.config and global.asax have been set up to work with WF. Persistence services are also set up.

I have created a StateMachine workflow. There are several states (StateActivity) containing EventDrivenActivity instances, inside of which are HandleExternalEventActivity instances. To set up the latter correctly so the application could compile, I created an interface decorated with the ExternalDataExchange attribute, and exposing the necessary events. I then created a class that implemented this interface.

That's as far as I got. Now I need to connect the class to my aspx page; events on the page need to trigger the events in the class.

My code looks something like this:

<ExternalDataExchange()> _
Public Interface ICatWorkflow
    Property RequestId() As Guid
    ...
    Sub requestInfoEmail()
    ...
    Event onReception(ByVal sender As Object, ByVal e As ExternalDataEventArgs)
End Interface

Class MyObject
    Implements ICatWorkflow
    Public Property RequestId() As Guid Implements ICatWorkflow.RequestId
        ...
    End Property
    Public Sub requestInfoEmail() Implements ICatWorkflow.requestInfoEmail
        ...
    End Sub
    Public Event onReception(ByVal sender As Object, ByVal e As ExternalDataEventArgs) Implements ICatWorkflow.onReception
End Class

On my form.aspx page, there is a button, and in form.aspx.vb there is a corresponding event handler:

Protected Sub btnReception_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
        Handles btnReception.Click
    ...
End Sub

Where to go from here?

From stackoverflow
  • I presume you are running a workflow per user session. If so you need to store the workflow instance id somewhere you can get to it. So either put it in a cookie or in the Session object. I prefer the cookie because it works even when the session times out or the AppDomain is recycled by IIS.

    Next you need to get a reference to the ExternalDataExchange service. That is easy if you have a reference to the workflow runtime. All you need is workflowRuntime.GetService(Of ExternalDataExchangeService)(). Next you use the service to raise the event that sends the message to your workflow.