another technical blog...technically

Monday, July 2, 2018

Scheduler is locked?

Lately, a lot of people talk about RPA, but very few talk about what happens when you have to scale and you have something like 20 robots working together, synchronized, with a "time/resource filling" target.
Sometimes you may see scheduled tasks stuck forever in "Pending", not responding to any delete or run request.
The solution: go to the DB and use this query to find all the pending processes in a human-readable way.

  SELECT 
       s.sessionid,
       r.name,
       p.name,
       s.startdatetime,
       st.statusid,
       st.description
  FROM 
  [BluePrism].[dbo].[BPASession] as s
  inner join BluePrism.dbo.BPAProcess as p
  on s.processid = p.processid
  inner join BluePrism.dbo.BPAResource as r 
  on r.resourceid = s.runningresourceid
  inner join BluePrism.dbo.BPAStatus as st
  on s.statusid = st.statusid
  
  where st.description = 'Pending'
  order by s.startdatetime

Then just change the end date from NULL to another value (maybe the same as the start date) and the statusid to "2", which stands for the Terminated status, and the linked resource will be released ;)
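A minimal sketch of that update (assuming the schema above and that you have already identified the stuck session; back up the DB first):

  UPDATE [BluePrism].[dbo].[BPASession]
  SET enddatetime = startdatetime,
      statusid = 2   -- Terminated
  WHERE sessionid = '00000000-0000-0000-0000-000000000000'   -- placeholder: the stuck session id from the query above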

Monday, June 25, 2018

Make your life a little easier with ODBC

I've written about the DLL import problem of Blue Prism and about possible workarounds with custom DLLs, but we can also make our life easier when we can.
For example, in my last project I had to work with a lot of data sources (SQL Server, MySQL and Access) all together.
All of the above are data sources that have ODBC connectors.
This means we can deal with every DBMS (and much more) simply by installing the ODBC connectors on the robot machine and using a single VBO to talk to them.
The key is to pass the driver info to the VBO so the connection string can be composed.
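For example (a hedged sketch: the exact driver names depend on the ODBC drivers actually installed on the robot machine), the composed connection strings could look like this:

// Hypothetical examples; driver names and versions vary with what is installed
string sqlServerCs = "Driver={ODBC Driver 17 for SQL Server};Server=MYSERVER;Database=MyDb;Trusted_Connection=yes;";
string mySqlCs = "Driver={MySQL ODBC 8.0 Unicode Driver};Server=MYSERVER;Database=mydb;User=myuser;Password=mypassword;";
string accessCs = "Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq=C:\\Data\\MyDb.accdb;";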

These are the VBO actions I created to deal with queries.

Read in collection

The action you need the most.
It has two pieces of code, because you may need to get a lot of data (more than 300,000 rows).
Using the SqlCommand variable you can do more than "Select * from table": you can also filter before getting the data.
The first code block is faster but will not work with a lot of data; the second one is slower but works every time.
Pay attention, kiddo: BP passes parameters by value and not by reference, so you may need to do the "business work" inside the VBO action in order not to go "Out of memory".
OdbcConnection connection = new OdbcConnection(connectionString);
result = new DataTable();
exception = "";

try
{
 connection.Open();
 OdbcCommand command = new OdbcCommand(sqlCommand, connection);
 OdbcDataReader dataReader = command.ExecuteReader();
 result.Load(dataReader);
}
catch (Exception ex)
{ 
 exception = ex.Message;
}
finally
{
 connection.Close();
}

And this is the "big data" one.
OdbcConnection connection = new OdbcConnection(connectionString);
result = new DataTable();
exception = "";

try
{
 connection.Open();
 OdbcCommand command = new OdbcCommand(sqlCommand, connection);
 OdbcDataReader dataReader = command.ExecuteReader();

 DataTable resultSchema = dataReader.GetSchemaTable();
 List<DataColumn> listCols = new List<DataColumn>();
 if (resultSchema != null)
 {
  foreach (DataRow drow in resultSchema.Rows)
  {
   string columnName = System.Convert.ToString(drow["ColumnName"]);
   DataColumn column = new DataColumn(columnName, (Type)(drow["DataType"]));
   column.Unique = (bool)drow["IsUnique"];
   column.AllowDBNull = (bool)drow["AllowDBNull"];
   column.AutoIncrement = (bool)drow["IsAutoIncrement"];
   listCols.Add(column);
   result.Columns.Add(column);
  }
 }

 DataRow dataRow = null;
 while (dataReader.Read())
 {
  dataRow = result.NewRow();
  for (int i = 0; i < listCols.Count; i++)
  {
   dataRow[((DataColumn)listCols[i])] = dataReader[i];
  }
  result.Rows.Add(dataRow);
 }
}
catch (Exception ex)
{
 exception = ex.Message;
}
finally
{
 connection.Close();
 System.GC.Collect();
}

 Execute Command 

Just executes the query
OdbcConnection connection = new OdbcConnection(connectionString);
exception = "";

try
{
 connection.Open();
 OdbcCommand command = new OdbcCommand(sqlCommand, connection);
 command.ExecuteNonQuery();
}
catch (Exception ex)
{ 
 exception = ex.Message;
}
finally
{
 connection.Close();
}

 Execute Commands in a transaction

Executes more queries in a single transaction

OdbcConnection connection = new OdbcConnection(connectionString);
OdbcTransaction transaction = null;
exception = "";

try
{
 connection.Open();
 transaction = connection.BeginTransaction();
 OdbcCommand command = null;
 
 foreach (DataRow dr in sqlCommands.Rows)
 {
  string sqlCommand = dr["Command"].ToString();
  command = new OdbcCommand(sqlCommand, connection, transaction);
  command.ExecuteNonQuery();
 }
 
 transaction.Commit();
}
catch (Exception ex)
{ 
 if (transaction != null)
  transaction.Rollback();
 exception = ex.Message;
}
finally
{
 connection.Close();
}
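For reference, the sqlCommands input is just a collection with a single Command column; a hedged sketch of how it could be built for a quick test (the table and values below are placeholders):

DataTable sqlCommands = new DataTable();
sqlCommands.Columns.Add("Command", typeof(string));
sqlCommands.Rows.Add("INSERT INTO MyTable (Name) VALUES ('A')");        // placeholder statement
sqlCommands.Rows.Add("UPDATE MyTable SET Name = 'B' WHERE Name = 'A'"); // placeholder statement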


A good strategy could be to create CRUD actions on top of the ODBC VBO in order to implicitly implement the repository pattern.
Changing the connection string will just change the data source type, helping to create a "universal" repository.
And... that's it for now!

Monday, June 18, 2018

Import-SPWeb preserves ListItemId?

If you're reading this, you have my same problem: you don't have an answer to this question, and maybe you need to move a list across site collections/web applications/farms without changing the list item IDs.
Some say yes, others say no. The answer is that both are true: it depends on the preconditions.

Let's start from this article, which explains a little bit how the content DB works.
So, let's create a custom list named "Another custom list" and populate it with a bunch of items.
Yeah, just 3.
Now let's look at what happens in the content DB.
First, let's look at the list id: we can find it with a simple select on the AllLists table, querying by title.
With the list id in your hands you can find all the entries for the items in the AllUserData table, which is the repository of all the content DB items (you can also have a look at the composite primary key of the table in the previous screenshot).

Remember something?
Now, let's delete item 1 and create items 4 and 5 and see what happens (please note that Item1 stays in the content DB until I delete it from the site collection recycle bin). Moreover, in AllListAux I can now see the counter used to assign the list item id to new objects.
So this is what happens behind the scenes


Mmm.. Aux table
It's now time to export the list and take a look at what SharePoint did: unzipping the cmp file, I can read this in the Manifest.xml.
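By the way, the export itself was done with something along these lines (a hedged sketch: the source site URL, item URL and options may differ in your environment):

Export-SPWeb http://sp2013/sites/source -ItemUrl "Lists/Another custom list" -Path "C:\TEMP\anothercustomlist.cmp" -IncludeUserSecurity -IncludeVersions All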
You can see that every item is serialized in XML and, surprise surprise, there is an attribute called IntId. So let's see what happens when importing this cmp file into another site collection:
Import-SPWeb http://sp2013 -Path "C:\TEMP\anothercustomlist.cmp" -IncludeUserSecurity -UpdateVersions Overwrite
and the result is that I now have a new list with a new id, but the same information in AllListAux and AllUserData.
Only GUIDs changed in the import
and only GUIDs
So, even if I'm not showing you the AllUserData table, the list item IDs also remain the same, so the answer to our question seems to be yes.
But what if I try to re-import? Well, items are not overwritten, and I can see that the same objects are replicated with different list item IDs, starting from the counter value in the aux table.
So the answer is no if you are importing into a pre-existing list that already has other items, because you never know what the next item id will be.

Below you can find some code I used for a final test: it reads all the items from a list with more than 6000 items (not the one of this example), both from the source list and from the destination one, in order to do a final compare, which led me to conclude that Import-SPWeb is something good :)
string siteUrl = "http://sp2013/sites/test";
string listTitle = "Test";
string user = "Administrator";
string domain = "DEV";
string password = "Password";

using (ClientContext clientContext = new ClientContext(siteUrl))
{
 clientContext.Credentials = new NetworkCredential(user, password, domain);

 List list = clientContext.Web.Lists.GetByTitle(listTitle);
 clientContext.Load(list);
 clientContext.ExecuteQuery();

 ListItemCollectionPosition itemPosition = null;
 while (true)
 {
  CamlQuery camlQuery = new CamlQuery();
  camlQuery.ListItemCollectionPosition = itemPosition;
   // Paged view: 1000 items per page, loading the Title field
   camlQuery.ViewXml = @"
    <View>
     <ViewFields>
      <FieldRef Name='Title' />
     </ViewFields>
     <RowLimit>1000</RowLimit>
    </View>";

  ListItemCollection listItems = list.GetItems(camlQuery);
  clientContext.Load(listItems);
  clientContext.ExecuteQuery();

  itemPosition = listItems.ListItemCollectionPosition;

  foreach (ListItem listItem in listItems)
  {
   Console.WriteLine("Item Title: {0}", listItem["Title"]);
  }


  if (itemPosition == null)
   break;

  Console.WriteLine(itemPosition.PagingInfo);
  Console.WriteLine();
 }
}

Monday, June 11, 2018

BluePrism: drill down about housekeeping BP DB

As you already know, it's easy to run out of space on a DB server when you use BP.
The reason could be a poor logging design (e.g. logging everything) or maybe logging inside a loop.
It's clear that you don't need to log everything, but only what you effectively need to justify yourself if you make a mess, plus errors for sure.
Now you are thinking: what the hell have I done? Why was I so stupid to log everything?

Well, you are in good company, I did the same. But now it's time to fix everything.
You may be thinking that you can use the Archiving feature to store the session logs somewhere, but what if archiving doesn't work, or maybe you just want to erase the unneeded data?

Always remember that deleting a lot of data does not only mean spending a lot of time, but also watching the transaction log grow a lot, so try this only when you are sure you can back up the DB.
It's also clear that there is always a risk of corrupting the DB when you do something at the core level of a product.

So let's begin. The first thing you can do is to delete the content of BPAScheduleLog and BPAScheduleLogEntry. The scheduler log grows perpetually, so it could be a good idea to truncate BPAScheduleLogEntry and delete BPAScheduleLog; this is what BP says, and they also provide a script to delete all data in a particular time frame, but that's another story.
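A minimal sketch of that cleanup (assuming the DB is backed up and the scheduler is stopped; same table names as above):

-- Assumption: DB backed up and no scheduler running
TRUNCATE TABLE [BluePrism].[dbo].[BPAScheduleLogEntry];
DELETE FROM [BluePrism].[dbo].[BPAScheduleLog];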

The other massively filled table is BPASessionLog_NonUnicode, and here BP proposes a script which helps you delete all the entries; but in our case we want to selectively delete only specific log entries (maybe the log of a specific Business Object or process page).

BP said it could work, so, before applying it to a real DB, let's test it.
Let's create the base case, so I created 2 processes:
  • Test.Log1
    • Page1: calls Test.VBO.Log1 actions
    • Page2: calls Test.VBO.Log2 actions
  • Test.Log2
    • Page1: calls Test.VBO.Log1 actions
    • Page2: calls Test.VBO.Log2 actions
and 2 VBOs
  • Test.VBO.Log1
    • Action1: returns 2+2
    • Action2: returns 2*2
  • Test.VBO.Log2
    • Action1: returns 2-2
    • Action2: returns 2/2
Logging is enabled everywhere, and I ran every process twice on the same machine.


Then I entered the DB and, with this simple query, I discovered which combination of:
  • Process name
  • Process page name
  • VBO
  • VBO action
is logged the most:
SELECT processname,pagename,objectname,actionname,count(*) as num
FROM [BluePrism].[dbo].[BPASessionLog_NonUnicode]
group by processname,pagename,objectname,actionname order by num desc
And here... only the brave... what happens if I selectively delete rows here?
The answer is: absolutely nothing. The screenshots I posted below show the only pitfall I noticed on my local Blue Prism installation.
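For example, a hedged sketch of such a selective delete, using the object and action names of this test (adapt the WHERE clause to your case, and back up first):

DELETE FROM [BluePrism].[dbo].[BPASessionLog_NonUnicode]
WHERE objectname = 'Test.VBO.Log1'
  AND actionname = 'Action1'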
So, have fun cleaning your DB from unnecessary rows.
See ya.

Monday, June 4, 2018

A Blue Prism project with custom DLLs: load DLL from everywhere

I admit it, this is one of the most painful points when I work with Blue Prism. As explained in other posts about DLLs, we have to put class libraries into the root folder of Blue Prism in order to use their classes.
This can be a problem when you don't have the rights to access the Blue Prism root folder, so together with the great Antonio Durante we found a way to solve it using another forgotten beauty: reflection.

Here you can download a little Visual Studio project, composed of a few lines of code which simply get the content of an editable PDF and return it as a DataTable containing as many rows as lines of text found in the PDF.
This piece of code uses iTextSharp.dll to do the work, so we have a dependency here.


public class Program
{
 public Program(string blablabla)
 {
  // This is a fake constructor
 }

 public static DataTable LoadPdf(string path)
 {
   List<string> data = PdfHelper.ExtractTextFromInvoice(path);

  DataTable table = new DataTable();
  table.Columns.Add("Row", typeof(string));

  DataRow row11 = table.NewRow();
  row11["Row"] = "Yeah 1.1 version as well";
  table.Rows.Add(row11);

  foreach (string item in data)
  {
   DataRow row = table.NewRow();
   row["Row"] = item;
   table.Rows.Add(row);
  }

  return table;
 }

 static void Main(string[] args)
 {
    DataTable t = LoadPdf(@"C:\Users\v.valrosso\Downloads\test1.pdf");
 }
}

The code here is very simple, but just to see again how reflection works, I also wrote this code to test everything. As you can see, I load the assembly from a folder where you can also find the referenced DLL (I used the Debug output folder of the dummy project directly).

class Program
{
 static void Main(string[] args)
 {
  string assemblyPath = @"C:\TFS\Test\Code\Poc.ReferencingCode\bin\Debug\Poc.ReferencingCode.dll";

  Assembly asm = Assembly.LoadFrom(assemblyPath);
  //Assembly asm = Assembly.Load(File.ReadAllBytes(assemblyPath));
  Type t = asm.GetType("Poc.ReferencingCode.Program");

  var methodInfo = t.GetMethod("LoadPdf", new Type[] { typeof(string) });
  if (methodInfo == null)
  {
   // never throw generic Exception - replace this with some other exception type
   throw new Exception("No such method exists.");
  }

   // Program has no parameterless constructor, so pass a (dummy) string argument
   object o = Activator.CreateInstance(t, "dummy");

  object[] parameters = new object[1];
  parameters[0] = @"C:\Users\v.valrosso\Downloads\test1.pdf";       

  DataTable r = (DataTable)methodInfo.Invoke(o, parameters);
  Console.WriteLine(r);
 }
}

After that we can just play with BP, creating a new VBO and pasting the code in, and... les jeux sont faits.


Just a warning: BP locks the DLL, so you have to think of something smarter (Antonio and I developed something very, very smart, but sadly I cannot show it to you because it's top secret).

As usual, thanks to my colleague and friend Antonio Durante for the wonderful work we do together every day.

Monday, May 28, 2018

Why in the world i cannot get deferred item?

I was wondering why I could not get deferred items from a queue, so I drilled down a little bit into the BP database in order to see all the deferred items.

Just watching the queue
So I dumped the data before and after the completion of the item, just to understand how the DB row gets updated.

Old VS New
After that, we made a little action that injects SQL code from BP in order to get the deferred items.

select
       *
       from
       BPAWorkQueue as q join BPAWorkQueueItem as i
       on q.id = i.queueid
       where q.name = 'QueueName'
       and i.deferred IS NOT NULL
       and i.completed IS NULL
       and i.exception IS NULL

And this is how Claudia Musio (thanks for everything) and I solved the problem.

Monday, May 21, 2018

The KeepAlive problem

Sometimes you just cannot ask the IT department to disable the screensaver or the lock screen (and personally I never understood why, if we are dealing with virtualized machines), so you have to solve the problem somehow.

You can rely for sure on the LoginAgent VBO, which helps you understand whether the screen is locked so you can log in again; but if the screen is not locked, you can just move the mouse around so the session remains alive.

I tried some methods but only this one worked: in the global code you have to insert this piece of code.


<System.Runtime.InteropServices.DllImport("user32.dll")>
Private Shared Sub mouse_event(ByVal dwFlags As Integer, ByVal dx As Integer, ByVal dy As Integer, ByVal dwData As Integer, ByVal dwExtraInfo As Integer)
End Sub

Private Const MOUSEEVENTF_MOVE As Integer = 1
Private Const MOUSEEVENTF_LEFTDOWN As Integer = 2
Private Const MOUSEEVENTF_LEFTUP As Integer = 4
Private Const MOUSEEVENTF_RIGHTDOWN As Integer = 8
Private Const MOUSEEVENTF_RIGHTUP As Integer = 16
Private Const MOUSEEVENTF_MIDDLEDOWN As Integer = 32
Private Const MOUSEEVENTF_MIDDLEUP As Integer = 64
Private Const MOUSEEVENTF_ABSOLUTE As Integer = 32768

Public Shared Sub Move(ByVal xDelta As Integer, ByVal yDelta As Integer)
    mouse_event(MOUSEEVENTF_MOVE, xDelta, yDelta, 0, 0)
End Sub

After that you can create an action (e.g. MouseMove) which randomly moves the mouse around the screen.

Dim Randomizer As System.Random = New System.Random()
Dim x As Integer = Randomizer.Next(1, 640)
Dim y As Integer = Randomizer.Next(1, 480)

Move(x, y)

And that's it, another problem solved by Varro and Antonio Durante.

Monday, May 14, 2018

Use BP Tags to do something more

What about tags?
I think tags can be used to do more than simply mark an item so you understand what kind of item the robot is working on.
Indeed, we thought about a new way to use them: replicating the .NET Dictionary ToString().

You can represent a dictionary as a set of key-value like this:

Key1:Value1;Key2:Value2;Key3:Value3;

And yeah, using just a bit of code, you can play with tags.
Look at this diagram: here we play with tags using some custom code.


Dim patternTag As New Regex("(" + tagkey + ":)+([^;]){0,}")
Dim patternKey As New Regex("(" + tagkey + ":)+")
Dim match As Match
Dim value As String

match = patternTag.Match(tagString)

If match.Success Then
 valueTag = match.Value.TrimEnd(";")
 value = patternKey.Replace(valueTag, "")
 valueTag2 = valueTag.Replace(value, newValue)
 success = True
Else
 valueTag2 = tagKey + ":" + newValue
 success = False
End If

First we get all the tag data of a particular item in the queue, and we look for all values of the form Key1:whatever (note that what is Key:Value for us is just one tag for BP); then we delete the pre-existing Key1:Value1 and add Key1:NewValue.
It's clear that we assume every tag of the form Key-i:Value is a singleton.

With this configuration we can do interesting things, for example track technical statuses.
More generally, when you work with a process (especially an attended one), you have different phases, which can be mapped to BP item statuses.
In a single phase you can do a lot of operations, and if the phase fails it's normal to repeat it.
Sometimes you cannot redo some operations (maybe the robot is paying someone) and, because of business rules, you don't want to split the original phase into more phases (i.e. into more statuses).
So, assuming the item is in status S1 and the robot crashes right after doing Operation1 on a particular system, when the robot retries the item you can design the process so that, if the item is in status S1, the Operation1 step is skipped.

As usual, thanks to Antonio Durante and Raffaele Ecravino :-)

Monday, May 7, 2018

Scheduling processes and unregistered named pipe

Surprise surprise, a customer asked to start a robot at 7:00 AM; but if you generally work from 9 AM to 9 PM like me, you'll find that starting a robot at 7:00 AM is... how can I say... not so beautiful.
So it's time to make the scheduler work!

Let's assume the Resource PC we are using is configured to have the LoginAgent service listening on port 8081, and the Resource PC itself, ready to fire, listening on port 8082.
While the LoginAgent is up and running, the Resource PC is not yet ready to start an automation, so we have to schedule this sequence of tasks:
  1. Login on 8081
  2. Run on 8082
  3. Logout on 8082
Da first configuration
It's also true that between 1 and 2 anything can happen, and you have to rely on the resilience settings; but what can also happen is that the login fails because of an unregistered named pipe.

Just the login agent log

I can see you asking yourself: WTF is a named pipe?
Well, I read a lot about it but I don't remember a single word; the concept is that a named pipe is a method for IPC (Inter-Process Communication), so I can assume it's used to resume the LoginAgent.

Indeed, referring to the schedule above, if you observe the control room you'll see that when the Resource PC is connected on port 8081, port 8082 appears disconnected and vice versa; this means that the LoginAgent is listening for login requests while 8081 is connected and, while 8082 is in use (so until the logout), the LoginAgent is not available.
Sometimes it happens that, even if the LoginAgent service is available on port 8081, the login fails because of the unregistered named pipe, and the solution is... surprise surprise... reboot the machine.
You can find all registered named pipes just with this PowerShell command
[System.IO.Directory]::GetFiles("\\.\\pipe\\")

And the output will be something like this


So the next step is to translate this code into a VBO action, so you can discover whether the BP named pipe is registered or not: you can simply use this code to get all the named pipes and then just filter the output collection.

Dim listOfPipes As String() = System.IO.Directory.GetFiles("\\.\pipe\")
dt.Columns.Add("C1")
For Each s As String In listOfPipes
    dt.Rows.Add(s)
Next

Use VBO... use the force
Just the BP log after the cure

The next step is to integrate this new logic into the login task, so that if the robot finds out that the named pipe is not registered, it reboots the Resource PC.

And now login like this

The final step is just to add a new task, 2.Wait; this task will be:
  • Called if the login failed
  • Followed by the login task, both on success and on failure

A new day, a new design
With this little update, if the login fails (and this is why I raise an exception after the machine reboot), the wait process (it could be anything, maybe even an empty one) will use the resilience of BP to keep the scheduled task alive.
The pitfall of this technique is that you can end up in an infinite loop, so take care.

So my friends, that's it; thanks to my friend Antonio Durante for his time, which helped us work this out.

Monday, April 30, 2018

Just another out-of-date but maybe useful trick with KnockoutJS

I know what you're thinking: another post on KnockoutJS? Again?
To be clear, I wrote this post many months ago but never completed and published it, and I really could not just delete it.

So today I will show you just a little example of communicating widgets: it's nothing complicated, and I think it can also be reproduced with other JS frameworks.
Basically, let's assume we have an HTML page which contains 2 widgets; the first one is self-contained and reusable, while the second one depends on the first (to be honest this is a simplification of the real-world example, where I had 4 widgets communicating with each other, but...).

So we have the code of the first controller, which is the child one.
window.Dummy = window.Dummy || {};
window.Dummy.Custom = window.Dummy.Custom || {};
window.Dummy.Custom.View = window.Dummy.Custom.View || {};
window.Dummy.Custom.View._Child = window.Dummy.Custom.View._Child || {};

(function (controller, utility, api, $, ko) {
    // View model definition
    controller.ViewModel = function viewModel() {
        var vm = this;
        vm.Items = ko.observableArray();

        // Search in all view model items
        vm.Get = function (id) {
            //identify the first matching item by name
            vm.firstMatch = ko.computed(function () {
                var search = id.toLowerCase();
                if (!search) {
                    return null;
                } else {
                    return ko.utils.arrayFirst(vm.Items(), function (item) {
                        return ko.utils.stringStartsWith(item.Id, search);
                    });
                }
            }, vm);

            return vm.firstMatch();
        };

 ...
    };

    // Controller definition
    controller.Loaded = ko.observable(false);
    controller.LoadData = function (data) {
        controller.Vm.Items.push.apply(controller.Vm.Items.sort(), data);

        ...
    };
    controller.Vm = new controller.ViewModel();
    controller.Wrapper = document.getElementById("workout-definitions");

    // Controller initialization
    controller.Init = function () {
        ko.applyBindings(controller.Vm, controller.Wrapper);
        api.GetAll(controller.LoadData);
    };

}(window.Dummy.Custom.View._Child = window.Dummy.Custom.View._Child || {},
    Dummy.Custom.Utility,
    Dummy.Custom.Api._ChildItems,
    jQuery,
    ko));

The controller is capable of calling the API exposed for the objects of this widget, so it is somewhat independent from the context. The instance of the view model exposes the Get method, which searches the array of objects loaded from the API and returns a single object.
Here instead is the code of the parent widget:

window.Dummy = window.Dummy || {};
window.Dummy.Custom = window.Dummy.Custom || {};
window.Dummy.Custom.View = window.Dummy.Custom.View || {};
window.Dummy.Custom.View.Parent = window.Dummy.Custom.View.Parent || {};

(function (controller, 
 utility, 
 $, 
 ko,
    parentApi,
 childView) {

    // View model definition
    controller.ViewModel = function viewModel() {
        var vm = this;
  ...
    };

    // Controller definition
    ...

    // Controller init
    controller.Vm = new controller.ViewModel();
    controller.Init = function (data) {
        controller.LoadUserData(data);

        // Await widgets readyness and then fire method  
        // Load events if athlete is new
        childView.Loaded.subscribe(function (newValue) {
            if (newValue === true) {
    ...
            }
        });
    };
 
    ...
}(window.Dummy.Custom.View.Parent = window.Dummy.Custom.View.Parent || {},
    Dummy.Custom.Utility,
    jQuery,
    ko,
    Dummy.Custom.Api.ParentItems,
    Dummy.Custom.View._Child
));

As you can imagine, I dropped most of the code just to show you the concept. The parent widget depends on the child controller, which can be started by calling its Init function anywhere (maybe on the document loaded event); the parent simply subscribes to the child's Loaded observable.
This means that when the child finishes loading its data, the parent is triggered to do something else, which can be useful.
It's clear you can use this model with more than one widget and with more than one triggered event, and you have to take care about the data you load, because you will use every widget as an item container rather than just a graphical element.
I hope you find it useful even if outdated; I preferred to share this post because maybe you can replicate the model with other JS frameworks that support an event-subscription technique.

Monday, April 23, 2018

Another HOWTO about media center on Raspberry pi3 (Part 2/2)

I promised this article would be a little bit more interesting: indeed, I will share the bash scripts I wrote to manage some activities.

sendip.sh

I've created a simple script which sends me an email when the IP changes, or sends me the IP by default at midnight. First I subscribed to a service called smtp2go, then installed ssmtp and configured it like this:
sudo apt-get install ssmtp mailutils 
sudo nano /etc/ssmtp/ssmtp.conf 

rewriteDomain=smtp2go_ChosenDomain
AuthUser=smtp2go_AccountUsername
AuthPass=smtp2go_AccountPassword
mailhub=mail.smtp2go.com:2525 
UseSTARTTLS=YES 
FromLineOverride=YES 

After that I wrote down these lines:
#!/bin/bash
# Just a script to send me an email with my IP
# Use "sendip" to execute the command and "sendip force" to force email send

# Const
readonly LAST_IP_FILEPATH="/home/pi/scripts/lastIp"
readonly MAIL_RECIPIENT="myemail@email.com"

# Main
CURRENT_IP=$( curl ipinfo.io/ip )
LAST_IP=""

# If 'force' delete IP file content
if [ "$1" = "force" ] || [ ! -e $LAST_IP_FILEPATH ]
then
    echo "[INFO] Creating new file containing IP"
    echo "" > $LAST_IP_FILEPATH
    echo $CURRENT_IP > $LAST_IP_FILEPATH
    echo "[INFO] Sending email containing IP"
    echo "$CURRENT_IP" | mail -s "IP" $MAIL_RECIPIENT
else
    echo "[INFO] File found, getting last ip from file"
    LAST_IP=$( cat $LAST_IP_FILEPATH )
    if [ "$LAST_IP" = "$CURRENT_IP" ]
    then
        echo "[INFO] IP not changed since the last poll, no need to send an email"
    else
        echo "[INFO] Whoah! ip changed, i need to send you the new one"
        echo $CURRENT_IP > $LAST_IP_FILEPATH
        echo "$CURRENT_IP" | mail -s "IP Changed" $MAIL_RECIPIENT
    fi
fi

Then I make the script executable as a bash command:
path="/home/pi/scripts/sendip.sh"
sudo ln -sfT "$path" /usr/local/bin/sendip
chmod +x "$path"

Then, finally, I register the command in crontab, paying attention to the first row (see the sketch below). The sendip command checks whether the IP has changed since the last run and, if so, sends you an email with the new public IP.
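A minimal sketch of the crontab (assuming the adjustment to the first row is adding /usr/local/bin to the PATH so that sendip resolves; the schedule shown, an hourly check plus a forced email at midnight, is only an example):

# crontab -e
PATH=/usr/local/bin:/usr/bin:/bin
# check every hour, send an email only if the IP changed
0 * * * * sendip
# force the email at midnight
0 0 * * * sendip force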

convert.sh

The other script I created helps you convert media files: if you have something the media player can't play, you can use the script to launch a media conversion.

#!/bin/bash
# Just a wrapper to avconv with my preferred settings


# Const
readonly INPUT_DEFAULT_DIR="/media/Vault/Download/2Convert/"
readonly OUTPUT_DEFAULT_DIR="/media/Vault/Download/"
readonly MAIL_RECIPIENT="youremailaddress@email.com"
readonly MAIL_SUBJECT="LittleBox: File converted"


# Function
sendMail(){
 endEpoch="$(date +%s)"
 
 # Compute the difference in dates in seconds
 tDiff="$(($endEpoch-$startEpoch))"
 # Compute the approximate minute difference
 mDiff="$(($tDiff/60))"
 # Compute the approximate hour difference
 hDiff="$(($tDiff/3600))"
  
 message=""
 if [ $mDiff -gt 59 ]
 then
  message="File $inputFile processed in approx $hDiff hours"
 else
  message="File $inputFile processed in approx $mDiff minutes"
 fi
 
    echo $message | mail -s "$MAIL_SUBJECT" $MAIL_RECIPIENT
}

executeFileConversion() {
 inputFile=$1
 outputDirectory=$2
 startEpoch="$(date +%s)"

 # Get filename and create output file
 filename=$(basename "$inputFile")
 extension="${filename##*.}"
 filename="${filename%.*}"
 outputFile="$outputDirectory$filename.mkv"
 echo "[INFO] Output file will be: $outputFile"
 
 cmd="avconv -i '$1' -c:v libx264 -preset medium -tune film -c:a copy '$outputFile' -y"
 echo "[INFO] Conversion command will be: $cmd"
 eval $cmd
 sendMail $inputFile $startEpoch
}

executeFileConversionDefault() {
 IFS=$'\n'
 files=( $(find $INPUT_DEFAULT_DIR -type f) )
 for i in "${!files[@]}"; do 
  echo "[INFO] Executing conversion of '${files[$i]}'"
  executeFileConversion "${files[$i]}" "$OUTPUT_DEFAULT_DIR"
 done
}


# Main
if [[ $# -eq 0 ]] ; then
    echo "[INFO] No parameter specified, all file in default dir will be processed"
 executeFileConversionDefault
elif [[ $# -eq 2 ]] ; then
 executeFileConversion "$1" "$2"
fi

p2p.sh

I used the last script to shut down the p2p applications when I saw they were degrading the pi2's performance. The pi3 doesn't suffer from the multithreading anymore because it has more firepower, but maybe it could still be useful to some of you.
#!/bin/bash
# Just a script to start/stop p2p services
# Use "p2p start" to start all registered services and "p2p" stop to shutdown

# Const
startCmd=( )
# Amule
startCmd[0]="sudo /etc/init.d/amule-daemon start"
# Transmission
startCmd[1]="sudo service transmission-daemon start"

stopCmd=( ) 
# Amule
stopCmd[0]="sudo /etc/init.d/amule-daemon stop"
# Transmission
stopCmd[1]="sudo service transmission-daemon stop"


# Functions
execCmd(){
 declare -a argArray=("${!1}")
 for i in "${!argArray[@]}"; do 
  echo "[INFO] Executing command ${argArray[$i]}"
  eval ${argArray[$i]}
 done
 
}


# Main
case $1 in
 "start" )
  echo "[INFO] Starting all registered services"
  execCmd startCmd[@]
  ;;
 "stop" )
  echo "[INFO] Stopping all registered services"
  execCmd stopCmd[@]
  ;;
esac

I think all the scripts are quite self-explanatory and I hope you find them useful. That's all.

Monday, April 16, 2018

Another HOWTO about media center on Raspberry pi3 (Part 1/2)

Hi guys, it's been a long time since I last touched my Raspberry Pi 2 media server (from now on, LittleBox). So, with the Raspberry Pi 3 release, I decided to do a little upgrade and create the new LittleBox, which is the same as the old one but with a Pi 3, so more powerful.
Because I lost all the old scripts while formatting the SD card, I decided to rewrite them and share everything with you :-).
For the old version I used just the command-line version of Raspbian, so I controlled it through PuTTY sessions from my own PC (the old-fashioned way); this time I noticed the default image is the one with a UI, so... why not? This also drove me to change some of the installed software.

Goals

My hardware configuration is a Raspberry Pi 3 with a little fan, an attached 2 TB HDD formatted in NTFS, and an Ethernet connection. My goal is to create a small PC based on the latest Raspbian installation that acts as a:
  • Media center 
  • Home Backup NAS 
  • Download station
So i've installed the following:
  • aMule: old but good... maybe 
  • avconv: useful for media conversion 
  • Plex: THE media server 
  • Transmission: just a torrent daemon 
  • VLC & AV codec: you never know 
  • Dos2Unix: sometimes used when i edit some files from Windows PC 
  • Fail2Ban: useful if you expose your little server on the internet 
  • MailUtils: utilities to send email, useful to send some mails directly to me 
  • Monit: useful to monitor some services 
  • NTFS-3G: drivers for NTFS filesystem
  • SMB Server: the best way to share files between a UNIX like system and a Windows one 

HDD Install

Let's create a folder to mount the HDD
sudo mkdir /media/Vault
sudo chmod 777 /media/Vault 
then install the NTFS drivers:
sudo apt-get install ntfs-3g
then edit the fstab file
sudo nano /etc/fstab
and add the following lines
# Custom
/dev/sda1    /media/Vault   ntfs-3g   rw,defaults   0   0 
Now, if you reboot, the HDD will be mounted at /media/Vault.

Setup SMB Sharing 

Let's now setup the SMB share, at first let's install the package
sudo apt-get install samba samba-common-bin
sudo apt-get install cifs-utils
Then let's edit the smb.conf adding the following lines
sudo nano /etc/samba/smb.conf
wins support = yes

[pi] 
   comment= Pi Home 
   path=/home/pi 
   browseable=Yes 
   writeable=Yes 
   only guest=no 
   create mask=0777 
   directory mask=0777 
   public=no 

[Vault] 
   comment= Vault 
   path=/media/Vault 
   browseable=Yes 
   writeable=Yes 
   only guest=no 
   create mask=0777 
   directory mask=0777 
   public=no 
After that, you need to set the SMB password for the pi user:
sudo smbpasswd -a pi 
Now you'll be able to access the pi home and the external HDD with a Windows PC.

InstAll

Be sure you have enabled SSH and VNC.
Now it's time to install aMule and Transmission and configure them to be accessible from the web.
In this script I install the aMule daemon and get a hashed version of the chosen password, which I will set for the user logging in to the aMule web server.
sudo apt-get install amule-daemon amule-utils 
amuled -f  
amuleweb -w 
echo -n YourPreferredPassword | md5sum | cut -d ' ' -f 1 
dc9dc28b924dc716069dc60fbdcbdc30 

nano /home/pi/.aMule/amule.conf  
Here are the rows of the file I want to edit; note that I use the external HDD to store the temp and incoming files, because I want to reduce the write operations on the SD card as much as I can:
[eMule] 
AddServerListFromServer=1 
AddServerListFromClient=1 
SafeServerConnect=1 
...
TempDir=/media/Vault/Download/Temp 
IncomingDir=/media/Vault/Download 
...

[ExternalConnect] 
AcceptExternalConnections=1 
ECAddress=127.0.0.1 
ECPort=4712 
ECPassword=dc9dc28b924dc716069dc60fbdcbdc30 

[WebServer] 
Enabled=1 
Password=dc9dc28b924dc716069dc60fbdcbdc30 
PasswordLow=dc9dc28b924dc716069dc60fbdcbdc30 
...
After that we just need to set the default amule user to pi:
sudo nano /etc/default/amule-daemon 
AMULED_USER="pi" 
Now aMule will be available on port 4711 via browser; to make it start as soon as the server is rebooted, we can use crontab:
crontab -e 
#Amule 
@reboot amuled -f 
It's time to install the Transmission daemon and tweak some settings:
sudo apt-get install  transmission-daemon 
sudo nano /etc/transmission-daemon/settings.json 
Here is the configuration I use; I think the settings are really self-descriptive:
"blocklist-enabled": true, 
"blocklist-url": "http://john.bitsurge.net/public/biglist.p2p.gz", 
"download-dir": "/media/Vault/Download" 
"incomplete-dir": "/media/Vault/Download/Temp" 
"incomplete-dir-enabled": true 
"peer-port-random-on-start": false, 
"port-forwarding-enabled": true, 
rpc-password: YourPreferredPassword, 
rpc-username: pi,  
rpc-whitelist: *.*.*.* 
sudo /etc/init.d/transmission-daemon reload 
sudo /etc/init.d/transmission-daemon restart 
Now let's install Plex Media Server, using a custom repository from dev2day
sudo apt-get update && sudo apt-get install apt-transport-https -y --force-yes 
wget -O - https://dev2day.de/pms/dev2day-pms.gpg.key | sudo apt-key add - 
echo "deb https://dev2day.de/pms/ jessie main" | sudo tee /etc/apt/sources.list.d/pms.list 
sudo apt-get update 
sudo apt-get install plexmediaserver -y 
sudo apt-get install libexpat1 -y 
sudo apt-get install mkvtoolnix -y 

sudo service plexmediaserver restart 

sudo nano /etc/default/plexmediaserver 
PLEX_MEDIA_SERVER_TMPDIR=/media/Vault/Download/Temp 
PLEX_MEDIA_SERVER_USER=pi 
sudo chown pi /var/lib/plexmediaserver/ 
and now we can install all the other software listed before:
sudo apt-get install libav-tools libavcodec-extra vlc dos2unix ufw fail2ban
Now it's time for Monit, which will help us quickly understand what's going on on our little server:
sudo apt-get install monit 
sudo nano /etc/monit/monitrc
set httpd port 2812 address 0.0.0.0 
   allow 0.0.0.0/0.0.0.0 
   allow pi:YourPreferredPassword 

check process aMule matching "amuled" 
   start program = "/etc/init.d/amule-daemon start" 
   stop program = "/etc/init.d/amule-daemon stop" 
   if failed host 127.0.0.1 port 4711 then restart 

check process Plex with pidfile "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/plexmediaserver.pid" 
    start program = "/etc/init.d/plexmediaserver start" 
    stop  program = "/etc/init.d/plexmediaserver stop" 
    if failed port 32400 type tcp then restart 
    if 3 restarts within 5 cycles then alert 

check process SSHd 
    with pidfile "/var/run/sshd.pid" 
    start program = "/etc/init.d/sshd start" 
    stop program = "/etc/init.d/sshd stop" 
    if 3 restarts within 3 cycles then alert 
    if failed port 22 protocol ssh then restart 

check process Transmission matching "transmission-daemon" 
    start program = "/etc/init.d/transmission-daemon start" 
    stop program  = "/etc/init.d/transmission-daemon stop" 
    if failed host 127.0.0.1 port 9091 type TCP for 2 cycles then restart 
    if 2 restarts within 3 cycles then unmonitor 
So now everything is locked and loaded, but the second part, I promise, will be more interesting: I'll introduce some custom scripts that will help you manage your personal LittleBox. Stay tuned!

Monday, April 9, 2018

Update Content Type in sandbox solution: a forgotten beauty

As I said in previous blog posts, I'm using BP a lot with SharePoint on-premises or O365, in order to implement a centralized approach to attended RPA.
As we already know, nowadays the most used approach is to use PnP scripts, and this is something I really like; but I was also dealing with a great team, with really good skills in RPA and IT in general, and I could not spend time explaining yet another tool just to set up some SP web sites with old-fashioned custom lists.
So I explained to them something about sandbox solutions, but surprise surprise, the customer enjoyed the POC a lot and asked us for lots of CRs, including adding some JavaScript to the custom form and (nooooo!) adding fields to content types.
With Francesco Cruciani (thx man), I figured out how to solve the problem, simply attacking the feature manifest.
The solution is really simple and you can download it by simply clicking here.
As you can see we have:
  • Some fields
  • 1 CT
  • 1 List definition
  • 1 Feature 
Solution structure
After installing the sandbox solution, you just have to provision the list instance manually in order to be ready to use it.
Now, let's start with the update:
  1. Add a new field file: we called it Fields_Update
  2. Then update the CT and the list definition: we like order ;-)
The key is now simply here:
Only visible difference is Fields_Update
We just add the new module in the feature
Now let's focus on Test.SandBox.DataStructure.Template.xml:

  <!-- Placeholder values below: replace the version range, manifest location, ContentTypeId and FieldId with your own -->
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="Fields_Update\Elements.xml" />
      </ApplyElementManifests>
      <AddContentTypeField ContentTypeId="0x0100XXXXXXXXXXXXXXXX" FieldId="{11111111-2222-3333-4444-555555555555}" PushDown="TRUE" />
    </VersionRange>
  </UpgradeActions>

As you can see, we have just applied the new manifest, with an explicit reference to the action of adding the field to the content type and then pushing down the update, so the content type instances will be updated too: you just have to upload the wsp again with a different name and upgrade the solution from the sandbox solutions menu, and that's all.

No luck if you want to change the order of the fields in the form or change a data type: we have not investigated further.
This post is only meant to remind you that sometimes the old-fashioned way can be useful to make your life a little easier.

Monday, April 2, 2018

A Blue Prism project with custom DLLs: 4 dummies

It seems this blog post was found really interesting by a lot of you, but I also read a lot of comments asking how to set up a project and use a DLL with BP, so I will show you a practical and really simple example.
Here below is the code of the DLL I will use, just two classes:
  1. LogHelper: writes something to the event viewer (could be useful for bug hunting)
  2. Program: the typical entry point of every .NET console application
LogHelper.cs
using System;
using System.Diagnostics;

namespace BP.ExternalDll.Example
{
    public class LogHelper
    {
        private readonly string _log;
        private readonly string _source;

        public LogHelper(string source, string log)
        {
            _source = source;
            _log = log;
        }
        
        public void LogWarning(string message)
        {
            try
            {
                CreateEventSource();
                EventLog.WriteEntry(_source, message, EventLogEntryType.Warning);
            }
            catch (Exception)
            {
                // ignored
                // If you're here, it means you cannot write on event registry
            }
        }
        
        private void CreateEventSource()
        {
            if (!EventLog.SourceExists(_source))
            {
                EventLog.CreateEventSource(_source, _log);
            }
        }
    }
}
Program.cs
namespace BP.ExternalDll.Example
{
    public class Program
    {
        public static void Log()
        {
            string currentDirectory = System.Environment.CurrentDirectory;
            LogHelper _helper = new LogHelper("BP.ExternalDll.Example", "Test library");
            _helper.LogWarning(currentDirectory);
        }

        static void Main(string[] args)
        {
            Log();
        }
    }
}
So, compile everything and place your brand new DLL in this folder: C:\Program Files\Blue Prism Limited\Blue Prism Automate.
Don't try to put the file in other folders on your PC or to organize the BP folder with subfolders: it will not work, and don't argue with me that BP offers this functionality, IT DOESN'T WORK.
I said: IT DOESN'T WORK.
I figured out with Antonio Durante how to overcome this problem, but I think I will write about that in the near future.
In the code block you just have to write:
Program.Log()
and you will start to find some new rows in your event viewer. It's clear that this is just a little example; you can complicate things as much as you wish.
My advice is to always create a BusinessDelegate class that holds all the methods you want to expose, and to create a single VBO action for every method in BP; this will enhance testability and maintenance. That's all folks!


Monday, March 26, 2018

How Excel VBO in 5.0.33 destroyed my weekend (again locale issue)

Some weeks ago I discovered that, even though BP had been updated on the customer environment, the business objects had not.
So I decided to update the Excel VBO to the latest version, 5.0.33, and I started to see something strange.
When I tried to set a formula in an Excel cell, the formula did not work anymore; indeed, Excel was no longer able to compute formulas, and I also had similar problems with float values.

1. Static code comparison

I decided to drill down into the problem, and I started with a static comparison between the old code and the new code.
At first I noticed that the approach is more rational than in the previous versions (so well done, BP team), but I also discovered that something changed in the actions below:
Get Worksheet As Collection
Old code
 Dim ws as Object = GetWorksheet(handle, workbookname, worksheetname, False)

 ' Do we have a sheet?
 sheetexists = ws IsNot Nothing
 ' No sheet? No entry.
 If Not sheetexists Then Return

 ws.Activate()
 ws.UsedRange.Select()
 ws.UsedRange.Copy()

 Dim data As String = Clipboard.GetDataObject().GetData(DataFormats.Text, True)
 
 ' The data split into rows
 Dim asRows() As String = Split(data, vbCrLf)
 
 Dim table As New DataTable()
 ' Set a flag indicating the header row
 Dim isHeaderRow As Boolean = True
 
 For Each strRow As String In asRows
  If Not String.IsNullOrEmpty(strRow) Then
  
   Dim fields() As String = Split(strRow, vbTab)
   If isHeaderRow Then
   
    isHeaderRow = False
    For Each field As String in fields
     table.Columns.Add(field)
    Next
    
   Else

    Dim row as DataRow = table.NewRow()
    For i As Integer = 0 To fields.Length - 1
     If i < fields.Length Then
      row(i) = fields(i)
     Else
      row(i) = ""
     End If
    Next I
    table.Rows.Add(row)
    
   End If
    
  End If
  
 Next
 worksheetcollection = table
OOB 5.0.33
  Dim ws as Object = GetWorksheet(handle, workbookname, worksheetname, False)

 ' Do we have a sheet?
 sheetexists = ws IsNot Nothing
 ' No sheet? No entry.
 If Not sheetexists Then Return

 ws.Activate()
 ws.UsedRange.Select()
 ws.UsedRange.Copy()

 Dim data As String = GetClipboardText()
 
 worksheetCollection = ParseDelimSeparatedVariables( _
  data, vbTab, Nothing, True)
Set Cell Value
Old code
GetInstance(handle).ActiveCell.Value = value
OOB 5.0.33
GetInstance(handle).ActiveCell.Value = value
Dim activeCell = GetInstance(handle).ActiveCell
SetProperty(activeCell, "Value", value)
WriteColl
Old code
 ' Get to the cell
 Dim ws as Object = GetWorksheet(handle,workbookname,worksheetname)
 Dim origin as Object = ws.Range(cellref,cellref)
 Dim cell as Object = origin

 Dim colInd as Integer = 0, rowInd as Integer = 0 ' Offsets from the origin cell
 
 ' Deal with the column names first
 If includecolnames Then
  For Each col as DataColumn in collection.Columns
   Try
    cell = origin.Offset(rowInd, colInd)
   Catch ex as Exception ' Hit the edge.
    Exit For
   End Try
   cell.Value = col.ColumnName
   colInd += 1
  Next
  rowInd += 1
 End If
 
 ' Now for the data itself
 For Each row as DataRow in collection.Rows
  colInd = 0
  For Each col as DataColumn in collection.Columns
   Try
    cell = origin.Offset(rowInd, colInd)
   Catch ex as Exception ' Hit the edge.
    Exit For
   End Try
   'MessageBox.Show("RowOffset:" & rowInd & "; ColOffset:" & colInd & "; cell: " & cell.Address(False,False))
   cell.Value = row(col)
   colInd += 1
  Next
  rowInd+=1
 Next
OOB 5.0.33
 ' Get to the cell
Dim ws As Object = GetWorksheet(handle, workbookname, worksheetname)
Dim origin As Object = ws.Range(cellref, cellref)
Dim cell As Object = origin

Dim colInd As Integer = 0, rowInd As Integer = 0 ' Offsets from the origin cell

' Deal with the column names first
If includecolnames Then
 For Each col As DataColumn In Collection.Columns
  Try
   cell = origin.Offset(rowInd, colInd)
  Catch ex As Exception ' Hit the edge.
   Exit For
  End Try
  SetProperty(cell, "Value", col.ColumnName)
  colInd += 1
 Next
 rowInd += 1
End If

' Now for the data itself
For Each row As DataRow In Collection.Rows
 colInd = 0
 For Each col As DataColumn In Collection.Columns
  Try
   cell = origin.Offset(rowInd, colInd)
  Catch ex As Exception ' Hit the edge.
   Exit For
  End Try
  'MessageBox.Show("RowOffset:" & rowInd & "; ColOffset:" & colInd & "; cell: " & cell.Address(False,False))
  SetProperty(cell, "Value", row(col))
  colInd += 1
 Next
 rowInd += 1
Next
I highlighted what I think is the most interesting change: BP decided to use late binding instead of early binding when dealing with read/write operations.
It's not hard to guess that this is the problem, so I tried to discover the main differences between early binding and late binding, and I found this article on MSDN which clarifies everything (more or less):
Late binding is still useful in situations where the exact interface of an object is not known at design-time. If your application seeks to talk with multiple unknown servers or needs to invoke functions by name (using the Visual Basic 6.0 CallByName function for example) then you need to use late binding. Late binding is also useful to work around compatibility problems between multiple versions of a component that has improperly modified or adapted its interface between versions 
It’s also true that Microsoft does not encourage this kind of approach, indeed in the same article I read this:
Microsoft Office applications provide a good example of such COM servers. Office applications will typically expand their interfaces to add new functionality or correct previous shortcomings between versions. If you need to automate an Office application, it is recommended that you early bind to the earliest version of the product that you expect could be installed on your client's system. For example, if you need to be able to automate Excel 95, Excel 97, Excel 2000, and Excel 2002, you should use the type library for Excel 95 (XL5en32.olb) to maintain compatibility with all three versions. 

Even if I don't like this approach, because it doesn't help performance, I still didn't understand why it caused the problem described above, so I tried to reproduce the issue in Visual Studio, reusing the relevant code lines from the global code of the business object.

2. Dynamic code comparison

I created a fake process which did read/write operations on Excel, and I noticed no regressions in GetWorksheetAsCollection, unlike the other two actions; so I focused on the method below, which is called by both WriteColl and SetCellValue and is ultimately responsible for writing data to Excel.
Private Sub SetProperty(Instance As Object, Name As String, ParamArray args As Object())
    Dim culture = Thread.CurrentThread.CurrentCulture
    'culture = Globalization.CultureInfo.GetCultureInfo(1033)

    Instance.GetType().InvokeMember(Name, Reflection.BindingFlags.SetProperty, Nothing, Instance, args, culture)
End Sub
I discovered that the problem was the regional settings and not late binding itself; indeed, using the references below, I figured out that when you have to write data to Excel it's safe to use the en-US locale settings:
  1. How to: Make String Literals Region-safe in Excel Using Reflection
  2. Globalization and Localization of Excel Solutions
So I decided to retest everything with Visual Studio; consider that my culture is it-IT.
Wrong result using current culture
Right result just forcing none culture
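Putting it together, a minimal sketch of the kind of fix the post points at (essentially passing a fixed en-US culture, LCID 1033, to InvokeMember instead of the thread's current culture; test it on your own VBO first):

Private Sub SetProperty(Instance As Object, Name As String, ParamArray args As Object())
    ' Assumption: forcing en-US avoids locale-dependent conversion of numbers and formulas
    Dim culture = Globalization.CultureInfo.GetCultureInfo(1033)
    Instance.GetType().InvokeMember(Name, Reflection.BindingFlags.SetProperty, Nothing, Instance, args, culture)
End Sub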

3. Conclusion

After this analysis I discovered that, if you are running on machines with a locale different from en-US, you could have problems with data types other than string or text, so you can apply the fix above or roll back just the SetCellValue action to ensure yourself a better weekend.

Monday, March 19, 2018

Playing with RDP and Surface Automation

This is a small blog post about a little trick that is very useful when you are in the supervised-run phase.
Typically in this phase you run the process in debug mode at full speed on a particular machine, in order to discover the weak points of the automation and any performance issues.
If the automation is designed to use surface automation techniques, the desktop resolution becomes an important factor.
It might sound strange, but sometimes you have to deal with higher resolutions, so I was wondering what could be a good method to monitor several machines together at a higher resolution, and I found an answer in smart sizing.
screen mode id:i:1
use multimon:i:0
desktopwidth:i:2560
desktopheight:i:1280
session bpp:i:32
winposstr:s:0,1,374,70,1206,588
compression:i:1
keyboardhook:i:2
audiocapturemode:i:0
videoplaybackmode:i:1
connection type:i:6
displayconnectionbar:i:1
disable wallpaper:i:0
allow font smoothing:i:1
allow desktop composition:i:1
disable full window drag:i:0
disable menu anims:i:0
disable themes:i:0
disable cursor setting:i:0
bitmapcachepersistenable:i:1
full address:s:vcolmcn13044
audiomode:i:0
redirectprinters:i:1
redirectcomports:i:0
redirectsmartcards:i:1
redirectclipboard:i:1
redirectposdevices:i:0
redirectdirectx:i:1
autoreconnection enabled:i:1
authentication level:i:2
prompt for credentials:i:1
negotiate security layer:i:1
remoteapplicationmode:i:0
alternate shell:s:
shell working directory:s:
gatewayhostname:s:
gatewayusagemethod:i:4
gatewaycredentialssource:i:4
gatewayprofileusagemethod:i:0
promptcredentialonce:i:1
use redirection server name:i:0
drivestoredirect:s:
smart sizing:i:1
The keys are these three parameters:
desktopwidth:i:2560
desktopheight:i:1280
...
smart sizing:i:1
Don't ask me why the remote machine has this resolution (2560x1280 vs the 1920x1080 of the physical machine), but this way I can resize the RDP window with no interference on surface automation.

Monday, March 12, 2018

Namespace collisions in BluePrism

This will be one of the shortest posts on my blog, and it's about a short and sad story of code refactoring.
I was trying to reorganize business objects by introducing namespaces; as you already know, BP encourages you to create more than just one business object per application. The problem is that you will commonly use at least one business object which holds the most common actions.
Let's imagine an application called Generic Bank Web System (yep, it's not an original name) and a VBO called GBW with these actions:
  • Login 
  • Log out 
  • Navigate to function 
  • Check payments 
  • Do payments 
  • Get customer data 
  • Set customer data 
Now imagine that at this point we split this object into three objects; this will help us keep the objects small and maintainable, and will also help prevent performance issues.
  • GBW 
    • Login 
    • Log out 
    • Navigate to function 
  • GBW.Payments 
    • Check payments 
    • Do payments 
  • GBW.Customer 
    • Get customer data 
    • Set customer data 
You can also choose other criteria when splitting your objects: by functional area, by process, etc.
What you cannot do is to name the group exactly like one of the existing business objects, because BP is not able to tell the difference between a group and a business object when compiling the process: even if during debug sessions everything seems to work like a charm, when you switch to production and launch the process from the control room, or maybe from the scheduler, the result will be an error.

Failed to create session on machine... - failed to get effective run mode – Unable to calculate run mode – object GBW does not exist 
KABOOM
Good to go
In this case, the solution is simply to rename the object GBW to GBW.Common; by the way, be careful to include in the common object only the error-proof actions, in order to avoid regressions.
Thanks to Antonio Durante for the useful brainstorming sessions … yes! again 😊

Monday, March 5, 2018

An approach to exception handling in BluePrism

One of the things I dislike the most is the fact that, when someone approaches RPA, he or she is led to think that no programming knowledge is needed: sadly, it's not true.
The proof of that is the poor exception handling I often see: we know from traditional programming that it is better to handle exceptions at the highest level, and in BP it is exactly the same.
First, we have to distinguish between technical and business exceptions:
  • Technical exceptions are the most common ones and are related to temporary application problems (or bad programming). They can be handled locally using the exception block, because these exceptions can be solved by simply retrying a specific part of the process, raising the exception only if the maximum number of attempts is reached;
  • Business exceptions are related to the process logic, and most of the time they mean that you cannot work the item in the queue, or maybe that you just have to skip that particular process step.
Since when we talk about RPA we are mainly talking about workflows, if something falls outside the logic we designed, most of the time the result will be that the item cannot be worked, so there is no reason not to manage these exceptions on the main page.
Consider that we are always dealing with the producer-consumer model (if you don't know what I'm talking about, read this blog post first), so in the image below I just added a piece to the producer-consumer model, defining exception types and recognizing them on the main page in order to do what is needed according to the exception type.

Exception handling to the higher level
After that, the robot will pick another item from the queue and start working again. Thanks to Antonio Durante for the useful brainstorming sessions which led us to define new standards.
