Almost there.

Posted by – August 7, 2010

I’ve been spending the past few weeks cleaning up and finalising the interface used to instrument Parrot. So here’s a recap of the major changes:

  1. Refactored Instrument.pmc
    Previously, Instrument.pmc served both as the interface for instrumenting the ops and as the attachment point for probes into the various components of Parrot. Refactoring the runcore-related functions out into InstrumentRuncore.pmc lets Instrument.pmc be the main interface for creating and attaching the probes.

Method madness

Posted by – July 28, 2010

Well, the past two weeks haven’t been very productive. Other than the usual addition of tests, I’m slightly stuck on how to approach instrumenting methods, and this post is meant as a sounding board.

In Parrot, methods of a class can be defined in a few ways. The first is by defining a sub in a namespace with the same name as the class and annotating the sub with :method. The second is through the use of the addmethod op. Lastly, it seems all PMCs are classes, and PMCs can define methods in C. So at the very least, methods can be either Sub or NCI instances (or rather, invokables), and through the addmethod op, an invokable can be a method of more than one class. This past week I added support for instrumenting the methods of a class, raising an event whenever any instance of the class invokes the method. What I did not foresee was that, due to the ability to “share” methods, an event would also be raised for another class that did not instrument the “shared” method. This unintended consequence is further exacerbated when the instances themselves are instrumented (I added the ability to instrument on a per-object basis too, although it seems like it’s not working too well right now).

So a step back to reflect is in order. I would need to:

  1. Keep a list of classes and instances that instrument any given method.
  2. When the method is invoked, check the invocant against this list and raise the event if required.
    1. The invocant can be found in the CURRENT_CONTEXT (I think)

It sounds like a plan, albeit not very fleshed out. Time to find out if it works.

Week 7: Refactor refactor

Posted by – July 13, 2010

Done this week:

  1. Added in generator scripts.
    cotto suggested it would be a good idea to put the scripts I used to generate the stubs in tools/build, so that got done. With that in place, I added a way to get at the parameters passed to the vtable and GC functions. The example below shows what I mean:

    .sub '' :anon :load :init
        load_bytecode 'Instrument/InstrumentLib.pbc'
    .end

    .sub main :main
        .param pmc args
        $S0 = shift args
        $P0 = new ['Instrument']
        $P2 = get_hll_global ['Instrument';'Event'], 'Class'
        $P1 = $P2.'new'()
        # (configuration of the event object and its handler is elided here)
        $S0 = args[0]
        $P0.'run'($S0, args)
    .end

    .sub class_handler
        .param pmc data
        $I0 = data['line']
        $P0 = data['parameters']
        $I1 = $P0
        $P1 = $P0[1]
        print 'Line: '
        print $I0
        print ' with '
        print $I1
        say ' parameters.'
        print 'address: '
        say $P1
    .end


    .sub main :main
        say 'Done'
    .end

    .sub test
        say 'Invoke!'
    .end

    Which yields the following output:

    Line: 3 with 2 parameters.
    address: 0
    Line: 25 with 2 parameters.
    address: 1006b0050

    Along the way, I discovered a small bug in src/pmc/pointer.pmc, in line 162:

    return Parrot_sprintf_c(INTERP, "%s", PARROT_POINTER(SELF)->pointer);

    Subtle, innocuous-looking code, until you realise that it formats the pointer as if it were a string, when it’s supposed to print the address itself.

  2. Partially implemented Instrument::Event::Class
    The code above shows that it is somewhat working. What is missing is instrumenting methods, removing the hooks, and instrumenting dynamically loaded classes.
  3. Massive refactoring of InstrumentGC and InstrumentVtable
    Both InstrumentGC and InstrumentVtable work in pretty much the same way, so by refactoring the common code out into InstrumentStubBase, most of the remaining code in InstrumentGC and InstrumentVtable is generated by scripts in tools/build. Refactoring the code had been in the back of my mind, and cotto also mentioned it in last week’s meeting with him. Trying to complete Instrument::Event::Class was what spurred this refactoring, as I found out more and more how similar InstrumentGC and InstrumentVtable are. Along with the refactoring, I took a step back and re-evaluated how my mappings for various items were done; that got changed too. So did the event dispatcher, again.
  4. Not as massive refactoring of EventDispatcher
    So this is the second week in a row that EventDispatcher got refactored. It can now dispatch events such as Class::Sub::vtable::main::invoke, that is: category, class name, vtable, vtable group and the item itself, which is more flexible than the previous scheme of splitting event names into 3 components.

What did not get done was most of last week’s goals; I got sidetracked into refactoring. Furthermore, I found out that I need to take inheritance into account when instrumenting the vtables. As an example, EventHandler extends Sub, so when I instrumented Sub’s vtable, the stub entries got imported into EventHandler, leading to fireworks that I least expected. Ah well, at least while figuring it out I discovered how to use Xcode’s debugger on Parrot. So long, command-line gdb! And good riddance.

Goals this week will be to complete last week’s goals, since this week, week 8, is supposed to be code review, and most of that got covered last week.

Week 6: Vtable madness

Posted by – July 6, 2010

Done the past week:

  1. Finished up instrumenting GC.
    GC events are now fully instrumented, with the exception of the following 3 functions (is_blocked_mark, is_blocked_sweep, get_gc_info). An example of how to use this is shown below:


    .sub '' :anon :init :load
        load_bytecode 'Instrument/InstrumentLib.pbc'
    .end

    .sub 'main' :main
        .param pmc args
        $S0 = shift args
        # Create an Instrument::Event::GC object.
        $P0 = get_hll_global ['Instrument';'Event'], 'GC'
        $P1 = $P0.'new'()
        # (attaching 'do_gc_mark_callback' to the event object is elided here)
        $P2 = new ['Instrument']
        $S0 = args[0]
        $P2.'run'($S0, args)
    .end

    .sub 'do_gc_mark_callback'
        .param pmc data
        $I0 = data['line']
        $S0 = data['file']
        $S1 = data['type']
        print '('
        print $S1
        print ') at line '
        print $I0
        print ' in file '
        say $S0
    .end


    .sub main :main
        sweep 1
        say "End main"
    .end

    .sub meh
        $P0 = new ['Hash']
        $I0 = 0
      LOOP:
        $P0 = new ['String']
        inc $I0
        unless $I0 > 1 goto LOOP
        sweep 1
        say "End"
    .end

    Running the command: “./parrot gc_do_mark_sweep.pir gc_sample.pir” will yield the following output:

    (do_gc_mark) at line 19 in file gc_sample.pir
    (do_gc_mark) at line 4 in file gc_sample.pir
    End main
  2. Refactored EventDispatcher.
    Previously, my implementation of EventDispatcher only allowed registering handlers for specific events, such as ‘Instrument::Event::GC::administration::do_gc_mark’. With the implementation of InstrumentGC, this design proved rather inadequate, as I found out when I wanted to register a handler for all GC events, or for a certain subset of them.

    So refactoring and changing the internals of EventDispatcher came as a natural consequence of that. For what I wanted to do, I found that splitting the event type into 3 parts, namely Category, Group and Specific would do nicely for all cases that I could think of. To give an example, ‘GC::allocate::allocate_pmc_header’. If I’m only interested in the specific event, I can just register a handler for ‘GC::allocate::allocate_pmc_header’. If I’m only interested in events in the ‘allocate’ group, I can register a handler for ‘GC::allocate’. Similarly, for all GC events, I can register a handler for ‘GC’.

    Thinking ahead to what I want to do in week 7, this dovetails nicely with classes. As an example: ‘Class::ResizablePMCArray::push’, ‘Vtable::ResizablePMCArray::push_pmc’. In hindsight, refactoring it again to handle more levels might be better; looking at the Vtable example above, adding one more level would allow catching the push_* vtable entries as a group, i.e. ‘Vtable::ResizablePMCArray::push::push_pmc’.

  3. Added tests for EventDispatcher.
    Rather self-explanatory.
  4. Added tests for InstrumentGC.
    Rather self-explanatory too.
  5. Initial cut of InstrumentVtable.
    As of now, all vtable entries have a working stub that can be attached and removed at will. What is missing is getting information about the arguments to the vtable entry. For example, for VTABLE_push_pmc(INTERP, obj, pmc), currently only the name ‘push_pmc’ can be obtained, with obj and pmc on the way. An example:


    .sub '' :anon :load :init
        load_bytecode 'Instrument/InstrumentLib.pbc'
    .end

    .sub main :main
        .param pmc args
        $S0 = shift args
        $P0 = new ['Instrument']
        $P1 = new ['InstrumentVtable'], $P0
        $P2 = $P0['eventdispatcher']
        # Register a handler
        $P3 = get_global 'class_handler'
        $P2.'register'('Class', $P3)
        # Instrument push_pmc of class ResizablePMCArray
        # (the call that attaches to 'push_pmc' is elided here)
        $S0 = args[0]
        $P0.'run'($S0, args)
    .end

    .sub class_handler
        .param pmc data
        $I0 = data['line']
        print 'Line: '
        say $I0
    .end


    .sub main :main
        $I0 = 0
      LOOP:
        if $I0 > 5 goto DONE
        $P0 = new ['ResizablePMCArray']
        $P1 = box $I0
        push $P0, $P1
        inc $I0
        goto LOOP
      DONE:
        say 'Done'
    .end

    Running the command: “./parrot vtable_push_pmc.pir vtable_test.pir” yields the following output:

    Line: -1
    Line: 8
    Line: 8
    Line: 8
    Line: 8
    Line: 10
    Line: 10
    Line: 10
    Line: 10
    Line: 10
    Line: 10

    Since I have not implemented the appropriate Instrument::Event class for Vtable events, the code in ‘vtable_push_pmc.pir’ is rather lower-level than I would have preferred. But it will do for now.

With that, this week I would like to:

  1. Finish up InstrumentVtable
    • Implement the Instrument::Event class
    • Add a way to get at the vtable arguments
    • Documentation + Tests
  2. Do InstrumentClass, InstrumentObject
    • Build on InstrumentVtable to add support for methods.
    • Add ability to instrument a single object.
  3. Start on user documentation.
    There is currently no tutorial or anything on how to use the framework. Write something to show the simple stuff, instrumenting the ops.

Week 5: Unforeseen troubles.

Posted by – June 30, 2010

Due to unforeseen personal circumstances, I did not manage to do much this week. However, I did manage to finish instrumenting the GC subsystem. So, here’s a short recap:

  1. Updated InstrumentOp.pmc
    After discussion with cotto, I changed the interface slightly, combining 4 methods into 1 as getting the context information again and again is not very efficient.
  2. Added tests for Instrument::Probe
    This is self-explanatory. Tests instrumenting the core ops, dynops, op family and ensuring that the callbacks are really called.
  3. Instrumented GC
    Creating the stub functions for each GC_Subsystem entry was rather tedious, as there were quite a few of them. As of right now, all the GC_Subsystem entries except for is_blocked_mark, is_blocked_sweep and get_gc_info have their own corresponding stubs.

    Each stub does the following:

    1. Call the respective GC function.
    2. Gather the data to send as part of the event to be raised.
    3. Raise the event.

    Furthermore, in addition to inspecting each individual GC_Subsystem entry, I have also grouped the entries into the following categories: allocate, reallocate, free, administration. This grouping is based on what each function is supposed to do, using the Mark and Sweep GC as the basis. The leftover functions are grouped under administration.

    What is not done yet, however, is the interface for these events, which I’m currently in the midst of. My current problem is getting the EventDispatcher to recognise and dispatch to catchall event handlers, such as Instrument::Event::GC::*. This is needed for cases such as wanting to instrument only the PMC-related entries of the GC_Subsystem. What I have currently doesn’t allow this, so a rework is in order.

This week I’ll be continuing on the following tasks:

  1. Finish up the event interface for the GC.
  2. Instrument the PMC vtables.

Week 4 Synopsis

Posted by – June 22, 2010

Progress this week was not bad. To recap, here’s what I got done this week:

  1. Handle exit/unhandled exceptions.
    Whenever the exit opcode is encountered, Parrot throws a CONTROL_EXIT exception which propagates up the exception handler stack to the first handler that can handle it. In most cases, this will be the C exception handler created before entering the runloop in the function runops (see src/call/ops.c), which triggers the interpreter cleanup routines. Exiting in this manner does not allow the instruments to be finalized. By inserting another C exception handler after that point, such exceptions can be caught and the runloop can exit normally, allowing the instruments to be finalized. As a plus, since C handlers act as a catchall for exceptions, any unhandled exceptions will also be caught, again allowing the instruments to be finalized. To this end, I also modified the Parrot_runloop struct (see include/parrot/call.h) and the appropriate routines to keep a reference to the thrown exception, allowing the C exception handler to know what was thrown.
  2. Clean up the hook tables on destroy.
    This was listed as a todo in the source, so I did it by first creating a new function to delete a list and then calling that function for each hook list.
  3. Update the op callback interface.
    Previously, three parameters were passed to the callback: the current relative PC (Program Counter), a ResizablePMCArray containing the op number and its arguments, and the Instrument object itself. This wasn’t very convenient, as to get information about the op, one had to get the OpLib instance and then obtain the OpCode instance for the op in question. So I reworked it, creating an InstrumentOp dynpmc which conveniently allows looking up most of the required information, along with the file, line and namespace the op is in (accuracy of this is subject to the core’s ability to obtain the info). Getting the file, line and namespace wasn’t particularly hard. With initial guidance from cotto, I traced it all the way to Parrot_Context_get_info (see src/sub.c), which conveniently is already marked PARROT_EXPORT and provides the information I wanted in the form of the Parrot_Context_info struct.

    Additionally, in the process of fleshing out the InstrumentOp dynpmc, I also removed a todo item with regards to the special ops which have variable arguments (set_args_pc, get_results_pc, get_params_pc, set_returns_pc), which up until then, were simply ignored.

  4. Hooks on dynops.
    This was previously listed as a todo in runtime/parrot/library/Instrument/Probe.nqp. In the process of designing tests for the class Instrument::Probe, I got sidetracked into revisiting this todo. Now, upon detection of any dynop libraries, all probes that have pending op hooks are disabled and re-enabled, attaching hooks to the relevant dynops in the process. The tests for Probe.nqp and EventLibrary.nqp are still pending, as I’m looking at whether I can improve the Instrument::Probe and Instrument::Event interfaces further, namely by adding methods to get the current state of the objects.

With reference to the previous post, I did not manage to complete the following two tasks:

  1. Adding tests for Probe.nqp and EventLibrary.nqp
  2. New events (sub call, class events, exception events)

I hope to be able to complete these two tasks by Thursday (June 24th).

For this week, I would like to complete the following:

  1. Add events for GC.
    This is mostly replacing the entries of interp->gc_sys with appropriate stub functions that will raise an event. Hopefully it shouldn’t take that long (prays for no crashes, since raising the event will invoke the GC itself, methinks).
  2. Dynamically remap dynops (internally; nothing to do with the dynop_mapping branch).
    To date, I have not been able to successfully run tracer.nqp on perl6.pbc, at least as far as getting to the REPL prompt. This is mostly due to dynops (I think), since the dynops used in perl6.pbc depend on the order in which the dynop libraries were loaded during the compilation of that PBC. I have somewhat of an idea of how to approach this, but have not probed enough to know the details (namely, in what order the dynop libraries were loaded, and how to remap the dynops to the op table). Hopefully it will be successful.
  3. Prepare to instrument the PMC Vtables.
    After discussing with cotto, it would be better if I do this first while waiting for the dynop_mapping branch to merge.

Week 2 + Week 3 Synopsis: Dynlib Digressions

Posted by – June 15, 2010

The process of loading a dynop library is seemingly straightforward: simply using the “.loadlib” directive or the “loadlib” opcode in PIR will result in the library being loaded and integrated into the interpreter environment. That is, until you have more than 1 interpreter in play, or the bytecode that you load has different opcode numberings than what is currently reflected in the interpreter.

Parrot’s solution to the former is to simply disallow loading dynop libraries when there is more than one interpreter in play. There is an assumption about how interpreters are created: there is only one main interpreter, and additional interpreters are spawned as ParrotThreads (at least that is how I understand the code). Creating interpreters according to this model yields the expected behaviour, since the main interpreter preloads everything the threads need, obviating the need to load dynop libraries in the threads. However, creating an additional interpreter outside this model breaks the assumption, and one can then load dynops in the second interpreter. The main reason is that only the first interpreter is registered in the “interpreter_array”, while the second is not. Since the code that detects whether there is more than 1 interpreter relies on the “interpreter_array” and the associated “n_interpreters” count, the second interpreter is non-existent from the point of view of the core.

With regards to dynops, in the process of updating the core op tables, there is a high possibility that pointers to the various op tables change to point to new locations in memory. With 1 interpreter, everything is fine, as that interpreter’s op table references are updated accordingly. Adding a second interpreter to the mix results in a segmentation fault when the second interpreter tries to access its outdated references to the op tables. There has to be a way to detect changes to the op tables, or to notify all currently existing interpreters of them.

Trying to change the code in the core to achieve this was an exercise in frustration. After trying out a few things, namely forcibly registering all interpreters by modifying Parrot_cx_init_scheduler, broadcasting messages in dynop_register using Parrot_cx_broadcast_message, and trying to make sense of the failures I was getting (mostly assertion failures), I discovered a few things. First, there are no GC runs after a thread is created (if I understand correctly; see src/pmc/threadinterpreter.pmc). Apparently it’s not really stop the world, more like stop the GC, and I don’t see anywhere that the GC is re-enabled. Second, there is no simple way to halt a thread and know that it has halted. I can broadcast messages to all interpreters all I want, but unless I know for sure that all interpreters have stopped, it is dangerous to go ahead with updating the op tables. One thing I wanted to try was pt_thread_wait and pt_thread_signal, but that got blocked as I can’t get those symbols exported (putting them in thread.h doesn’t help; marking them as PARROT_EXPORT also doesn’t help). Since that didn’t work, I’m stopping this experiment, as it has taken way too much time and it’s not part of my objectives.

All that effort is not wasted though. Now that I understand more of how the internals work, I can do my own DIY dynlib detection. In the same vein, I found the reason for a segmentation fault I encountered when trying to run “tracer.nqp” against “examples/pir/io.pir”. I had worked around it by simply making the child interpreter’s vtables point to the parent’s instead; digging into the internals this past week, I think I figured out why it happened. Apparently, NQP loads the OS dynpmc, and “examples/pir/io.pir” also loads the OS dynpmc. At that point, there are two different entries in two different vtables with different base_type numbers. For normal dynpmcs this is fine, as each interpreter only looks at its own vtable entries. However, the OS dynpmc is a singleton: when NQP loads it, an instance is created and stored, and when io.pir loads it again, the stored instance is saved in the MRO (method resolution order) list of io.pir’s copy of the OS vtable entry. So io.pir’s OS vtable entry’s index is around 85, but the one in its MRO list is about 101, and this difference leads to a segfault when trying to invoke a method of OS.

Back on track, I do my dynlib detection in the Instrument dynpmc itself, in the function detect_loadlib. Since each interpreter stores a hash of the dynlibs it has loaded in its iglobals hash, detecting newly loaded dynlibs is simply a matter of comparing an old list of loaded dynlibs with the current list. The Instrument dynpmc can now detect all dynlibs loaded through “.loadlib” directives and “loadlib” opcodes, and it creates the task/event “Instrument::Event::Internal::loadlib”, which can then be handled in PIR. With that in place, I can proceed to implement more events to raise.

Currently, I have three events, which are Instrument::Event::Internal::loadlib, Instrument::Event::Class::instantiate, Instrument::Event::Class::callmethod. An example script to use these events is shown below (in NQP):

Q:PIR { load_bytecode 'Instrument/InstrumentLib.pbc' };

sub loadlib_cb ($task) {
	my $data := Q:PIR {
		find_lex $P0, '$task'
		%r = getattribute $P0, 'data'
	};
	say('Library loaded: ' ~ $data[0]);
}

sub class_cb ($task) {
	my $data := Q:PIR {
		find_lex $P0, '$task'
		%r = getattribute $P0, 'data'
	};
	say('Class instantiated: ' ~ $data[0]);
}

my $args := pir::getinterp__p()[2];

# (construction of the two event objects, hooking up loadlib_cb and
#  class_cb, is elided here)
my $loadlib_evt :=;

my $class_evt :=;

my $instr := Q:PIR { %r = new ['Instrument'] };

$instr.run($args[0], $args);

Running this against “./examples/pir/io.pir” yields:

Library loaded: io_ops
Library loaded: os
Class instantiated: OS

Well, it’s not much, but it’s a start. Not to mention I’ve fixed most of my crashes already.

4th Week’s Tasks:

  • Rethink the callback interface: Passing so many things to the callback is not very fun.
  • Class events: Add ability to inspect per class.
  • Internal events: Exceptions, per sub calls.
  • Tests for Probe, EventLibrary: Now that the interface is mostly settled, start writing those tests.
  • File, line information: Figure out how to get file and line number information. Look at the profiling runcore/unhandled exceptions/etc. I think those print out that info.

Square one + a bit.

Posted by – June 4, 2010

During my meeting with cotto on Tuesday (my time), he suggested looking into replacing entries within the op_func_table on the instrumented interpreter (CHILD). So the past few days, I’ve been working on it. The initial plan was to create a copy of that table, replacing ops that have hooks attached with a stub function that will fire the hooks before executing the op itself. The operation itself went swimmingly, until the monster known as “dynops” popped up to say “OHAI!”.

Before I go into the problems dynops pose for my initial approach, I have to say that cotto was right: this approach has better performance characteristics than what I was doing earlier, which was to store the hooks in a ResizablePMCArray holding Hashes of hooks. That was a bit of a cop-out on my part, as it was the path of least resistance for adding and removing hooks: it simplified that code, at the cost of going through more levels of VTABLEs. With that, I’ve changed the data structure that holds the probes. It is now an array of linked lists, with a 1-1 mapping between linked lists and ops. With this major change, running the example code [0] against examples/pir/mandel.pir yields an approximately 5% performance increase (user 16.077s vs user 17.001s previously).

Back on topic, so the plan is to create a copy of the op_func_table, instrument it and change CHILD’s to refer to it instead of the core op_func_table. This was rather easy. Then, dynops dropped by.

With dynops in the equation, it did not turn out easy. The problem is, if the instrumented table is not switched back to the core table, the core table will end up pointing to the instrumented table at the end of dynop_register (see src/runcore/main.c). Also, how is the supervising interpreter (PARENT) supposed to know when new dynops are loaded in the CHILD? The PARENT has to know when the core op_func_table is changed so that it can update its own references accordingly. Currently, dynop_register gets around this problem by prohibiting the loading of dynops when there is more than 1 interpreter in play.

So, hold on. “more than 1 interpreter in play”. But there are two interpreters, PARENT and CHILD! Apparently, creating interpreters through Parrot_new will only register the first interpreter in the list of interpreters held by core. Thus,

Parrot_Interp PARENT = Parrot_new(NULL);
Parrot_Interp CHILD    = Parrot_new(PARENT);

PARENT will be registered, as traced through Parrot_new -> Parrot_cx_init_scheduler -> pt_add_to_interpreters (I might have missed some steps in between).

However, CHILD is not: in Parrot_cx_init_scheduler (see src/scheduler.c), since CHILD has a parent interpreter, pt_add_to_interpreters (see src/thread.c) is not called. So, as far as the core is concerned, there is only 1 interpreter in play, and that interpreter is PARENT. That is why the check “n_interpreters > 1” in dynop_register fails, and how I inadvertently was able to run 2 interpreters and have dynops loading, although it segfaults later on when PARENT tries to access its outdated tables. Manually registering CHILD by calling pt_add_to_interpreters is a no-go, as then neither PARENT nor CHILD can load any dynop libraries. Given that CHILD has to be able to load dynop libraries, I’m at an impasse.

To recap, the problem at this juncture is: if CHILD loads a dynop library, the instrumented table must be swapped out and replaced with the core table. After loading is complete, the instrumented table is updated and swapped back in. After all that, PARENT must be notified of the load so that it can update its own table references.

In my previous post, I mentioned making use of the events system to detect when CHILD is going to execute a “loadlib” op and to raise an event that PARENT can handle. But that won’t work for “.loadlib” directives (see [1]), as a directive is not an op. So scratch that idea for now (although I think it is a good idea for normal “loadlib” ops). Digging into the events system, I noticed that it already does some of what I was doing: to check for events, a copy of the core op_func_table is made (see include/parrot/interpreter.h, interp->evc_func_table), with all entries in that table pointing to the “check_events__” op. This table is also taken care of in dynop_register, which helpfully extends it whenever new dynops are added.

Then I realised something: I’m going to create a copy of the op_func_table and replace each entry with a stub function, and this table will then be used for op lookup and execution by the runcore. And I control CHILD’s runcore. Why don’t I move the call to the stub function into the runcore itself? So that’s where I’m at now: square one + a bit.

To temporarily get around the problem brought about by dynops, I simply added a check in the runcore to update PARENT’s op tables whenever they do not match CHILD’s. I did some thinking on this: maybe make dynop loading a STOP THE WORLD event, just like GC. This could be done in dynop_register, broadcasting to all interpreters to halt and, when all have halted, loading the dynop library before broadcasting again to do the necessary updates and resume. I will revisit this later, seeing that I’m running out of time for this week’s tasks, which are, to recap: bug hunting and squashing for tracer.nqp, implementing the event notifications library, and tests.

So, bug hunting. Apparently I did something and now I can obtain and print out the STRING KEY constants. But the segfault with examples/pir/io.pir remains when I try to trace it using tracer.nqp. Running it under a debugger shows that it segfaults when trying to access INTERP->vtables[SELF->vtable->base_type]->_namespace (in src/pmc/default.pmc, line 549). Before this line, control is in find_method_direct_1 (in src/oo.c, line 1051), where the variable _class is being queried for its namespace. The only problem is that the base_type of _class is 101, which is not a valid number, given that at that point in time there are only about 87 PMC types.

Running simple-tracer.pir [0] on it has no problems, from which I can only suspect that a GC run happened and the object got cleaned up, so bad data was being read. This would make sense, given that tracer.nqp is written in NQP and does lots of string concatenation, which should cause the GC to run more frequently. Now, if only there were a tool I could use to confirm this… Oh wait, I’m supposed to be making those tools… GAH!

[0] Simple-tracer.pir (Edited 13/06/2010)

.sub '' :load :init :anon
    load_bytecode 'Instrument/Instrument.pbc'
    load_bytecode 'Instrument/Probe.pbc'
.end

.sub 'main' :main
    .param pmc args

    $P0 = shift args

    .local pmc pr, in, probe_class
    probe_class = get_hll_global ['Instrument'], 'Probe'
    pr = new probe_class
    # (configuring the probe with the 'cb' callback is elided here)

    in = new ['Instrument']

    $S0 = args[0]
    in.'run'($S0, args)
.end

.sub 'cb'
    .param int pc
    .param pmc op
    .param pmc instr

    print 'Op: '
    $I0 = op[0]
    say $I0
.end


Week 1 Synopsis

Posted by – May 31, 2010

Last week marked the start of the GSoC coding period. Prototyping during the bonding period helped me create an initial version of my framework, and currently I’m focusing on creating an interface to inspect each opcode that is being executed. This interface is the class Instrument::Probe [0]. Initially, this class was written in PIR, which quickly became rather untenable, with labels and jumps all over the place. My mentor, cotto, suggested implementing it in NQP instead, and that suggestion has proven to be a godsend. The code is cleaner and easier to follow, not to mention easier to add new features to.

As of now, the class Instrument::Probe allows the creation of probes that inspect the opcodes being executed. Each probe has an associated callback and a list of opcodes that it is inspecting; whenever the runcore encounters any of these opcodes, it calls the associated callback. This system is rather flexible, in that probes can be enabled or disabled dynamically and multiple callbacks can be associated with a single opcode. Each probe also has an associated finalize callback, which is called only at the end of execution and only if the probe is enabled at that point.

With that in mind, I’ve implemented a simple tracer example (examples/library/tracer.nqp [1]) that tries to mimic the output of the tracing runcore. It mostly works alright and is rather slow, which is a given seeing how much work it is doing per op, both at the PIR (or rather NQP) and the runcore levels. Not to mention its tendency to segfault every now and then, which I’m still in the process of tracking down. One current segfault happens when running it against examples/pir/io.pir, and it doesn’t happen when a simpler tracer that only prints the op number is used. Further testing shows that it seems to happen on the opcodes “get_results” and “set_results”, although I’m not quite certain why, probably because I’m accessing something the wrong way. Bug squashing is not a fun activity.

For this week, I'm working to solve a few problems in addition to tracking down the segfaults above. The first of these is dynops. This past week has seen major changes to the op libraries within Parrot due to "ops_massacre". These changes made me think about dynops and how they will affect my project. Currently, the runtime library does op lookups using the OpLib and OpCode PMCs, and this lookup is done before the code is loaded/compiled. Because of this, the opcodes contained in dynop libraries are non-existent from the point of view of the library. One way I'm thinking of solving this is to defer registering hooks for dynops, retrying whenever new dynop libraries are loaded. This can be done with a simple probe that looks for the opcode "loadlib" and raises an event when that opcode is encountered (I'm assuming ".loadlib" and "loadlib" work the same way). Which brings me to the question: is there an event system I can use? Simple tests and some digging (t/pmc/scheduler.t and PDD-24) showed me how to use Parrot's event system, and I think I can utilise it to implement the above and to serve as the basis for other events such as PMC creation and GC mark cycles, barring any unforeseen issues.
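The deferred-registration idea can be sketched as follows, in Python with purely illustrative names (none of this is actual Parrot API): hooks for ops that don't exist yet are parked, and retried each time a loadlib event announces new ops.

```python
# Sketch of deferring hook registration for dynop opcodes.
# All names here are illustrative stand-ins, not Parrot's API.

class OpLib:
    def __init__(self):
        # A few "core" ops known before any dynop library is loaded.
        self.ops = {"set": 0, "say": 1, "loadlib": 2}

    def load_dynops(self, new_ops):
        # Loading a dynop library extends the op table.
        for name in new_ops:
            self.ops[name] = len(self.ops)

class HookRegistry:
    def __init__(self, oplib):
        self.oplib = oplib
        self.hooks = {}       # op number -> callback
        self.pending = []     # (op name, callback) awaiting a dynop load

    def register(self, name, cb):
        if name in self.oplib.ops:
            self.hooks[self.oplib.ops[name]] = cb
        else:
            self.pending.append((name, cb))   # defer: op not known yet

    def on_loadlib(self):
        # Retry everything that was deferred; keep the rest pending.
        still_pending = []
        for name, cb in self.pending:
            if name in self.oplib.ops:
                self.hooks[self.oplib.ops[name]] = cb
            else:
                still_pending.append((name, cb))
        self.pending = still_pending

oplib = OpLib()
reg = HookRegistry(oplib)
reg.register("trans_sin", lambda: None)   # hypothetical dynop, not loaded yet
assert reg.pending                        # hook was deferred

oplib.load_dynops(["trans_sin"])          # a "loadlib" op was encountered
reg.on_loadlib()                          # retry deferred registrations
print(len(reg.pending))                   # 0
```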

To recap, I spent most of this week learning just enough NQP to reimplement the runtime library in NQP and chasing down bugs uncovered by the simple tracer example. This coming week will be spent squashing the segfaults the tracer produces, making it faster, and implementing events and a runtime library to abstract them away. And I should start on the tests too.



Posted by – May 17, 2010

In Parrot, opcodes are currently implemented as standalone pieces of code that are invoked whenever their corresponding opcode number is encountered in the bytecode. Each opcode is described internally by an op_info_t struct, which, taken from <parrot/op.h>, is the following:

typedef struct op_info_t {
    const char    *name;
    const char    *full_name;
    const char    *func_name;
    unsigned short jump;
    short          op_count;
    arg_type_t     types[PARROT_MAX_ARGS];
    arg_dir_t      dirs[PARROT_MAX_ARGS];
    char           labels[PARROT_MAX_ARGS];
} op_info_t;

These descriptions are stored in an array, with each entry indexed by its opcode's number. This array is pointed to by the "op_info_table" field within the interpreter struct. That covers only the descriptions of the opcodes. Currently, each Parrot opcode is defined in a C-ish language that gets parsed into C source code, with each opcode becoming a function that satisfies the following prototype:

typedef opcode_t *(*op_func_t)(opcode_t *, PARROT_INTERP);

Similar to the opcode descriptions, these opcode function pointers are stored as an array, with the opcode’s number being the index to the position of the opcode’s function pointer. This table is pointed to by the field “op_func_table” within the interpreter’s struct.

So what happens during execution? A pointer, which is the program counter (PC), is passed to the runcore’s runops function. This runops function is the one that looks up the function pointer for the current op pointed to by the PC and calls that function for execution.

The code for an opcode does a number of things. First, it grabs the current context of the interpreter. The current context consists of a number of items, but chief among them are the current values of the registers (Parrot being a register-based VM). With this context, the op grabs the required values from certain registers and, after performing the required logic, writes values back to a certain register. After that, it advances the PC by a certain amount, which generally is PC + 1 + number_of_params.

So how would you know which registers the parameters are in? The PC actually points to somewhere within the bytecode. Generally, the PC will point to the start of an opcode. If an opcode has no parameters, advancing the PC by 1 will get us to the next opcode to execute. However, if the opcode has parameters, advancing the PC by 1 will get us the register or INTVAL constant for the first parameter. Similarly, advancing by 2 will yield the register or INTVAL constant for the second parameter.

This is all well and good except for the fact that Parrot allows the loading of dynamic op libs, which extend the capabilities of the VM by providing additional ops, similar to how MMX/SSE/SSE2/etc. extend the capabilities of an x86 processor. Parrot also shares the two tables "op_info_table" and "op_func_table" between interpreters, which makes sense, as keeping duplicates of these tables can be expensive memory-wise, given that Parrot already has more than 1000 core opcodes.

Handling these two issues won’t be trivial. With regards to dynamic op libs, there needs to be a way to detect when the library being loaded is a dynop library. One way is to trap loadlib opcodes and then peek to check if the op tables change. It would also be good to investigate if we can duplicate any shared tables such that the interpreter running the instruments has its own private tables that will be untouched by any changes to the tables of other interpreters.