Quenlig: Online Questionnaires

This documentation can be read from beginning to end: it starts with the most common usage patterns and contains links to the full technical references. As the system is easily extensible in many ways, it can be adapted to almost any use case.

What's Quenlig?

Quenlig allows you to create interactive "free text" questionnaires on the web.

Running Quenlig

You can try a French Unix questionnaire here (you need to enter an alias): http://demo710.univ-lyon1.fr/-quenlig-/guest.html

To run your own server, use the following sequence of commands:

# Create a new questionnaire session in the 'Students' directory.
# The questions are in the 'Questions/unix' directory.
# When running, the server will listen on TCP port 9999.
./main.py Unix2010Spring create Questions/unix 9999
# The guest user named 'guestadmin' will be the session administrator.
./main.py Unix2010Spring admin guestadmin
# Start the server; it displays the URL used to access it.
./main.py Unix2010Spring start

If you use CAS authentication, Quenlig can be configured to use it.

Questionnaire creation

Questionnaires are not defined as data structures but as Python programs. This is a much more flexible approach: it allows algorithms to generate the questions and check the answers.

The questionnaires are defined in the 'Questions' directory. The 'C_variable' directory contains a small commented questionnaire. A questionnaire directory must contain an empty __init__.py file to please Python.

The names of the files containing the questions are visible to the students. Each filename must be short: it defines in one word the subject or domain of all the questions it contains. This name is important because it helps the student see the context of the question.

Each file contains questions; the recommended way to create them is to start from the simplest ones. When a complex question is defined, it requires that the student has correctly answered a set of other questions. This set of required questions ensures that the student has the knowledge needed to answer the complex one.

When Quenlig displays a question, it also displays information about the required questions. This gives the student a context for answering the current question: they will see their old answers and may reuse parts of them.

Question creation

The file containing the questions should start with:

# -*- coding: utf-8 -*- 
from questions import * # all the needed functions

A minimal question is:

add(name="hello",
    question="Are you ready?",  # Question text is in HTML
    tests=(
           Good(Equal("yes")),
          )
    )

If you want to add a comment about the student's answer:

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(Comment(Equal("yes"),
	                "Hit return to see the next question")
               ),
          )
    )

This question requires that the student answers exactly 'yes', not 'Yes' or 'YES'. To work around this problem, the student's answer can be uppercased.

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Equal("yes"))),
          )
    )

If you also want to allow the answer 'sure' and the digit '1':

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Equal("yes")
	                  | Equal("sure")
			  | Equal("1"))),
          )
    )

If you don't like complex expressions, it is possible to write:

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Equal("yes"))),
           Good(UpperCase(Equal("sure"))),
           Good(Equal("1")),
          )
    )

The tests are run from first to last until one of them declares the answer good or bad.

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Equal("yes"))),
           Bad(UpperCase(Comment(Equal("no"),
                                 "Are you joking?"))),
           Good(UpperCase(Equal("sure"))),
          )
    )
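The evaluation strategy can be sketched with plain functions: each test returns 'good', 'bad' or None (no opinion), and evaluation stops at the first decision. This is only an illustration of the idea above; the names and the fallback rule are not Quenlig's internals.

```python
def run_tests(tests, answer):
    """Run tests first to last; stop at the first 'good'/'bad' verdict."""
    for test in tests:
        verdict = test(answer)
        if verdict is not None:
            return verdict
    return 'bad'   # assumption for this sketch: an undecided answer is bad

tests = [
    lambda a: 'good' if a.upper() == 'YES' else None,
    lambda a: 'bad' if a.upper() == 'NO' else None,
    lambda a: 'good' if a.upper() == 'SURE' else None,
]
print(run_tests(tests, "Sure"))  # good
print(run_tests(tests, "no"))    # bad
```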

If a student answers 'Yes I am', the answer will be rejected. We should not test equality but only check that the answer contains 'yes'.

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Contain("yes"))),
          )
    )

There is no help if the student answers 'foobar', which is not nice for the student. The tilde '~' indicates negation, so the following test matches answers that do not contain 'yes'.

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Contain("yes"))),
           Bad(UpperCase(Comment(~ Contain("yes"),
                                 "Please answer 'yes' or 'no'")
	                )),
          )
    )

Another way to add help is to let the student ask for tips. Most of the time it is better to give a good bad-answer comment than to add tips: students do not know when to ask for tips, some always ask, some never do. And they are frustrated when the tip was already given in a bad-answer comment, or when it is as lame as « Read the manual ».

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(UpperCase(Contain("yes"))),
          ),
    indices = ("You must answer a 3-letter English word",
               # The tips are in HTML
	       "Enter <b>yes</b> if you are ready.",
	      ),
   )

Or a simpler way to do all this is to use a base test that does all the work:

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(Yes()),
	   ),
    )

It is possible to check the number of bad answers in order to give a tip to a student who is stuck.

add(name="hello",
    question="Are you ready?",
    tests=(
           Good(Yes()),
           Bad(Comment(NrBadAnswerGreaterThan(3),
                       "Stop playing, and start working. Answer: YES")),
	   ),
    )

Once the student has answered this first question, a second question becomes available. With the following test, any number greater than or equal to 18 is a good answer. If the student enters a string that is not a number, a comment will explain that an integer answer is required.

add(name="age",
    question="How old are you?",
    required=["hello"],
    tests=(
          Good(IntGT("17")),
	   ),
    )

Do not forget that the order of the tests is important; the bad test will never be reached if you put it after the good one:

add(name="age",
    question="How old are you?",
    required=["hello"],
    tests=(
          Bad(Comment(IntGT("100"), "Your real age please.")),
          Good(IntGT("17")),
	   ),
    )

Another problem is that students may append 'years' to their answer. It is possible to get rid of it by using a replacement. The 'Replace' operator applies all the string substitutions to the student's answer before continuing the testing.

add(name="age",
    question="How old are you?",
    required=["hello"],
    tests=(
          Good(Replace((
                        (' ', ''),
                        ('years', ''),
                       ), IntGT("17"))),
	   ),
    )

If you also want to accept '20 Years' with an uppercase letter, there is a little problem, because canonizers are not applied to Replace parameters. This is done because for most canonizers it makes little sense to apply them to string fragments. The correct way to write the test is currently:

add(name="age",
    question="How old are you?",
    required=["hello"],
    tests=(
          Good(UpperCase(Replace((
                        (' ', ''),
                        ('YEARS', ''),
                       ), IntGT("17")))),
	   ),
    )

It is possible to display and check random questions:

choices = {'VARIABLE' : ("a", "b", "c"),
           'VALUE'    : ("100", "2", "333", "42"),
          }
add(name="affectation",
    question=random_question("How to store VALUE in the variable VARIABLE?",
                             choices),
    tests=(
           Good(Random(choices,
                       Replace(((' ', ''),),
                               Equal("VARIABLE=VALUE;")))),
           Random(choices, Expect("VARIABLE")),
           Random(choices, Expect("VALUE")),
           Expect("="),
           Expect(";"),
          ),
    )

Question reference

Here are all the possible arguments of the 'add' question function.

Parameter: name

It is the question name; keep it short and avoid special characters, especially '/', ':', '(' and ')'. It may contain spaces. This is the name displayed in the question list, so it must not give away the good answer.

Parameter: required

It is a list of required question names that must have been answered in order to see this one. If this parameter is not provided, the preceding question in the file is considered as required. Relying on this is a bad idea, because a change in the question order will break the dependency tree. If there is no preceding question, there are no dependencies and the question will be visible immediately.

If a question name contains ':', it references a question in another file of the questionnaire. The left part is the filename without the '.py' extension.

If the question name is followed by a regular expression in parentheses, the student's answer must match that regular expression. For example, personnel:sexe([mM]) requires that the answer given to the 'sexe' question in the 'personnel.py' file is 'm' or 'M'.
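The naming scheme can thus be summarized as 'file:question(regex)', where the file part and the regex part are optional. A small self-contained sketch of how such an entry decomposes (the parse_required helper is hypothetical, not part of Quenlig):

```python
import re

def parse_required(entry):
    """Split a 'file:question(regex)' entry into its three optional parts."""
    m = re.match(r'(?:([^:]+):)?([^(]+)(?:\((.*)\))?$', entry)
    return m.group(1), m.group(2), m.group(3)

print(parse_required("personnel:sexe([mM])"))  # ('personnel', 'sexe', '[mM]')
print(parse_required("hello"))                 # (None, 'hello', None)
```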

If the name contains:

Parameter: before

Defines the text displayed before the question itself: general explanations to help the student, or things that must be done or checked BEFORE reading the question itself.

The value of this parameter can be an HTML string or a function returning an HTML string. The function may take a 'state' parameter giving access to information about the student or the IP address of their computer.

If random functions are used inside the function, it will always return the same random sequence for a given student. Each student gets a different random sequence.
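For example, a 'before' function could personalize the text per student. This is only a sketch: the 'student_name' attribute on 'state' is an assumption, so check the real state API before using it.

```python
class FakeState:
    """Stand-in for Quenlig's session state (real attribute names may differ)."""
    student_name = "alice"   # assumed attribute, for illustration only

def before(state):
    # Returns the HTML displayed before the question, per student.
    return "<p>Hello %s, read the manual page first.</p>" % state.student_name

print(before(FakeState()))  # <p>Hello alice, read the manual page first.</p>
```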

To put 'my_picture' in the HTML, store the file in 'Questions/YourQuestionnaire/HTML/my_picture.png' and use the link: <IMG SRC='my_picture.png'>

If you want to add a CSS style to your questionnaire, put it in 'Questions/YourQuestionnaire/HTML/questions.css'.
You can also redefine existing translations, for example:

DIV.question_good > A:first-child.content > .box_title:before {
content: "Good! Press «Enter» to choose a random question." ;
}

If you want to add JavaScript to your questionnaire, put it in 'Questions/YourQuestionnaire/HTML/questions.js'

Parameter: question

It is the question text. It is defined exactly like the 'before' parameter, so it may be generated specifically for each student.

It is possible to display check buttons in place of a free text input. To do so, use {{{an answer}}} in your question text to indicate the possible choices. For example:

question = "Are you {{{M}}} male or {{{F}}} female?"

The possible student answers are then ' M', ' F' or ' M F'.
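Since the submitted string is a space-separated list of the checked choices, an order-insensitive check can simply split it. A self-contained sketch (the 'checked' helper is hypothetical, not part of Quenlig):

```python
def checked(answer):
    """Return the set of checked choices from the submitted answer string."""
    return set(answer.split())

print(checked(" M F") == {"M", "F"})  # True
print(checked(" F") == {"F"})         # True
```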

If you want radio buttons and not checkboxes, use '!':

question = "Are you {{{!M}}} male or {{{!F}}} female?"

If you want to randomize the order of the choices, put {{{ shuffle}}} as a choice. It will not be displayed. Beware: both ' M F' and ' F M' are possible when shuffling.

If you want to put a choice on the same line as the previous one, use '↑': {{{Y}}} Yes or {{{↑N}}} No.

If you want to put a text without a button, use '{{{}}}':

{{{}}} If you are sure: {{{↑Y}}} Yes, or {{{↑N}}} No
{{{}}} If you hesitate: {{{↑None}}} not Y nor N, or {{{↑B}}} bad question

Parameter: tests

The list of tests to be run on the student's answer. If this parameter is not defined, the student will see the question but will have no way to enter an answer.

There are many tests in the test reference documentation, and they are not all of the same kind.

Parameter: indices

A list of tips given as HTML strings. They are revealed one by one when asked for.

Parameter: nr_lines

An integer defining the number of lines expected for the student's answer. The default value is one.

Parameter: default_answer

A string that defines a default answer. The student modifies this default answer in order to answer the question.

It can also be a function taking a state parameter and returning a string.
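A sketch of such a function; as with the 'before' parameter, the attribute name on 'state' is an assumption for illustration only:

```python
class FakeState:
    """Stand-in for Quenlig's session state (real attribute names may differ)."""
    student_ip = "10.0.0.42"   # assumed attribute, for illustration only

def default_answer(state):
    # Pre-fill the answer box with a template the student will edit.
    return "ssh user@" + state.student_ip

print(default_answer(FakeState()))  # ssh user@10.0.0.42
```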

Parameter: highlight

If this parameter is True, the question is highlighted in the question list. The student should answer this question quickly because it is closely tied to their last answer.

Parameter: maximum_bad_answer

Specifies the maximum number of bad answers allowed. Once the student has given that many bad answers, the question is removed from the question list and the student can no longer answer it.

Parameter: maximum_time

Specifies the maximum time in seconds to answer. The countdown starts once the question is on screen; it does not stop if the student looks at another question or closes the window.

Parameter: good_answer

This HTML text is displayed to the student when they answer correctly.

Parameter: bad_answer

This HTML string is displayed to the student each time a bad answer is given, even if a specific bad-answer comment is also shown. So it is not a good way to give feedback, and using this parameter is not recommended.

A better way to add a comment to otherwise uncommented bad answers is to append this last test to the 'tests' parameter: Bad(Comment(Contain(''), "The comment"))

Experience with Quenlig shows that generic bad-answer comments do not really help the student.

Parameter: courses

By default, all the questions with a 'before' attribute and with 'courses' defined are displayed by the 'course documentation' plugin.

To structure the courses, the 'courses' parameter defines the position of the 'before' text in the table of contents, e.g. ("2: Bases", "2.1: Types", "2.1.2: Floats"). The table of contents is sorted by titles. If multiple questions are in the same part, they are sorted using the 'required' attribute. The numbering is optional, but it lets you control the sort order.

Parameter: perfect_time

If the question is answered in less than the specified number of seconds, the question is «perfectly answered». This is only useful with the competences plugin, because it allows the student to retry and displays the number of perfect answers.

The default value is 10 seconds.

If this time is too short, Quenlig will automatically increase it to the median time taken by the current session's students to give a good answer. The time of the first good answer is not taken into account because it is far longer than the following ones.

Session Management

Quenlig management is done with the ./main.py command. If you run it without parameters, it displays help about the command-line parameters and the list of sessions, with the questionnaire name, the TCP port used, the process and computer running it, and the session start date and duration.

The first parameter of ./main.py is the session name: the name of the subdirectory of 'Students' that contains all the information about the session and the students. The following parameters define actions and may be chained. The most useful actions are:

A user with admin rights can use the web interface to take the Teacher role and see all the information about the student group. The user interface is self-explanatory; use it to explore the possibilities.

Student Management

The ACLS (access rights and much more) define what a user is allowed to do when using Quenlig. The ACLS of a user are defined by the ACLS of the user's current role, then by the ACLS specific to the user.

Roles are defined in exactly the same way as users; there is no difference, and a user name can be used as a role name. You can add as many roles as you want. Two roles are hardcoded into Quenlig:

You can also edit ACLS and roles directly in files. For each student and role there is a directory named Students/YourSessionName/Logs/TheStudent containing:

Modifications of the Default or Teacher ACLS propagate to all the users with these roles. If the files are modified manually, the Quenlig server must be restarted to make the changes effective.

Access Rights (ACLS)

A Quenlig page is created by the composition of many plugins. The ACLS define, for each plugin, the operations allowed on it. Currently the only operation is 'executable'; in the future other operations such as 'movable' or 'hidable' may be added.

To change ACLS: display the student list, click on a student or a role, and click on 'Edit ACLS'. For each plugin you can then indicate whether it is allowed, rejected, or inherited from the current role. ACLS modifications have immediate effect and are stored in files.

Here is an example of a YourSession/Logs/Default/acls file:

# The file contains a Python dictionary.
{
  # Allow the students to modify their good answer if they want.
  'question_change_answer': ('executable',),
  # Allow the students to connect at any time.
  # This ACL change is commonly done for a single student
  # who needs to work outside the normal hours.
  'session_start': ('!executable',),
  'session_stop': ('!executable',),
  # Do not display the session duration; it is not applicable.
  'session_duration': ('!executable',),
  # The smileys and rank are only useful when all the students
  # are working at the same time.
  'statmenu_smiley': ('!executable',),
  'statmenu_rank': ('!executable',),
  # Do not display the number of bad answers.
  'statmenu_bad':('!executable',),
  # Do not shuffle the questions.
  'questions_shuffle': ('!executable',),
}

You can find the plugin names in the plugin reference page. Another way is to take the Teacher role and click on 'Debug': a star will be visible next to each GUI element, and by putting the cursor on the star you will find the name of the plugin displaying that element.

Author work

Questionnaire creation takes a long time: about one hour per question if you want to help the students. It is recommended to spend a minimal amount of time on question creation, but to take more time after each session to analyse the students' bad answers.

With the Teacher role, all the bad answers and comments are displayed for each question. Identical bad answers are merged, together with the list of students who gave them.

The bad answers that were not commented to help the student are shown with a red background. If more than one student gave such a bad answer, an explanation should be added for the students.

When adding tests to a question, the recommended way is to put all the Good tests first and the other tests after. It is a bad idea to put a Reject in front of the others, because one of the Good tests after it may use the rejected string. If there are many Good tests, it is easy to miss the problem.

The questionnaire author can modify the Python source defining a question and then click on « Reload question » to update the page with the new question. This lets them verify the behaviour of the updated question and check that the comments given to the students have been correctly updated.

Currently, if the question change introduces a Python syntax error, all the questions defined in the file become inaccessible to the students. So think twice before using this feature when tens of students are working.

Test creation

A test is a class with the following methods and attributes. The class hierarchy of tests is defined in the test reference documentation. Only leaves and canonizers are explained here; see the source code for the other kinds.

Leaves of the test expression

Most of the leaves are based on string tests, and the TestString base class is here to help: it provides the (disablable) canonization of the test parameter in self.string_canonized. The only thing needed to create a new test is to define the do_test method, which returns a boolean and a specific comment for the student.

class Contain(TestString):
    def do_test(self, student_answer, state):
        return self.string_canonized in student_answer, ''
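To experiment with a new leaf test outside Quenlig, the base class can be stubbed. Below is a self-contained sketch of a hypothetical StartsWith test; inside a real questionnaire you would inherit from the real TestString and drop the stub.

```python
class TestString:
    """Minimal stand-in for Quenlig's TestString, only for this sketch."""
    def __init__(self, string, canonize=True):
        self.string = string
        self.do_canonize = canonize
        self.string_canonized = string   # normally set by canonize_test

class StartsWith(TestString):            # hypothetical new leaf test
    def do_test(self, student_answer, state):
        # Returns (verdict, specific comment for the student).
        return student_answer.startswith(self.string_canonized), ''

print(StartsWith("yes").do_test("yes I am", None))  # (True, '')
```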

The __init__ and canonize_test methods are inherited from TestString:

class TestString(TestExpression):
    def __init__(self, string, canonize=True):
        if not isinstance(string, basestring):
            raise ValueError("Expect a string")
        self.string = string
        self.do_canonize = canonize
    def canonize_test(self, parser, state):
        if self.do_canonize:
            self.string_canonized = parser(self.string, state)
        else:
            self.string_canonized = self.string

canonize_test is called when a canonization must be applied to the test parameter; the parser function is the composition of all the enclosing canonizers. The canonization may depend on the session state; in that case, canonize_test is called each time the student answers (see the HostReplace canonizer).

Canonizers

The canonizers apply a transformation to the student's answer and/or to the child test parameters.

class UpperCase(TestUnary):
    def canonize(self, string, state):
        return string.upper()
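A new canonizer follows the same pattern; for example a hypothetical one removing all spaces (TestUnary is stubbed here so the sketch runs standalone):

```python
class TestUnary:
    """Minimal stand-in for Quenlig's TestUnary base class."""

class RemoveSpaces(TestUnary):           # hypothetical canonizer
    def canonize(self, string, state):
        # Applied to the student answer and to most test parameters.
        return string.replace(' ', '')

print(RemoveSpaces().canonize("a = 1 ;", None))  # a=1;
```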

The canonizers are automatically applied to student answers. They are applied to most test parameters (Replace is an exception). In rare cases a canonizer must apply canonize_test to its child expression (see the HostReplace canonizer). This is necessary if the canonization depends on the state value, for example the computer used by the student. To do so, the canonizer calls the initialize method with the current canonizer and the current state. At this point of the evaluation, self.parser contains the current enclosing parser. All the children of the canonizer are in the 'children' table, but every predefined canonizer has only one child.

    def canonize(self, string, state):
        self.children[0].initialize(
                lambda a_string, a_state:
                    your_canonizer(
                        self.parser(a_string, a_state)),
                state)
        return your_canonizer(string)

Never forget that canonization must be done from top to bottom (root to leaves). This is the opposite direction of a normal expression evaluation.
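The earlier age example illustrates this direction: with UpperCase(Replace((('YEARS', ''), (' ', '')), ...)), the enclosing UpperCase transforms the answer before the inner Replace sees it. Sketched with plain functions (names are illustrative):

```python
# Root-to-leaves: the outer canonizer runs first on the student answer.
def upper(s):        # plays the role of UpperCase
    return s.upper()

def strip_years(s):  # plays the role of Replace((('YEARS', ''), (' ', '')), ...)
    return s.replace('YEARS', '').replace(' ', '')

answer = "20 years"
print(strip_years(upper(answer)))  # 20
```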

If the canonizer works on a strict language, it may detect syntax errors. In this case, the canonizer can short-circuit the evaluation process and return a comment:

    def canonize(self, string, state):
        try:
            return my_canonizer(string, state)
        except MyError:
            return False, "There is a syntax error..."

Plugin creation

Each GUI element and Quenlig functionality is defined by a plugin. You can find the plugin names in the plugin reference page, but an interactive way is to take the Teacher role and click on 'Debug'. A '*' will be displayed next to each plugin; put the cursor over the star to see more information.

There are currently 69 plugins; their average size is about 40 lines, and the median size is only 25 lines excluding the messages. It is really easy to create plugins: the simplest way is to start from a copy of a simple existing plugin and modify it.

Currently 14 attributes may define a plugin's behavior and appearance, and 11 CSS attributes define the graphical aspects and the translations. The plugin attributes are defined for each language.

Plugins/
   plugin_name/
      __init__.py      # To please Python
      plugin_name.py   # Language independent attributes
      en.py            # English specialisation
      fr.py            # French specialisation
      ...

The reference contains all the attributes and plugins with their documentation. Most of the plugins are really short pieces of code that are easily understandable. A few things need some explanation.

What is triggering?

When the web page is generated, all the allowed plugins are executed in order to create the page. But in some cases, the page creation is triggered by a user action on a plugin, for example when the user answers the question, asks for a tip, or leaves a comment. In these cases, the triggered plugin performs the required action and also generates its part of the web page.

What is displayed?

If the plugin contains other plugins, it is a container box with all the contained plugins stacked vertically, unless the 'horizontal' attribute is True. The box title is defined in the CSS, or by the plugin evaluation if the 'content_is_title' attribute is True.

If the plugin does not contain other plugins, its evaluation returns the HTML code to display. If the 'link_to_self' attribute is True, the plugin HTML is a link that triggers the plugin execution when followed.

In both cases, the content or the box title may have a tooltip defined by the localized 'tip' attribute. Like all the texts of the interface, the tip is stored in the CSS and not in the HTML.

All the generated content is boxed in an HTML element whose class is the plugin name. When defining css_attributes, you write the selector relative to the plugin element selector. If your selector is empty, the style applies to the plugin element itself. If you want to write a complete selector, start it with '/'.

The order of the plugins in a container is defined by the 'priority_display' attribute. The display position can be defined as an absolute number or relative to another plugin.

Finally, when a plugin is triggered it may generate some content for the core of the web page. It stores this content in its 'heart_content' attribute; the heart_content plugin retrieves this attribute to put it in the right place at the core of the page.

Evaluation

The 'execute' attribute is called to generate the web page if the user is allowed (or required) to use the plugin. The function's most important parameter indicates whether the plugin has been triggered by the user.

The order of execution may differ from the display order. For example, when the user correctly answers a question with the question_answer plugin, all the other plugins must take this into account when displaying the user interface. So question_answer must be executed before the others, and its generated content must be stored until needed. The order of execution is defined with the 'priority_execute' attribute, in the same way as the display order.

As the generated content is stored for each plugin, it is easy to create plugins that transform the generated content of other plugins; the 'page' and 'debug' plugins are such plugins. But there is a catch: since the modifying plugin must be evaluated last, two plugins cannot mutually modify each other.

Plugin option

Each plugin can define a single session-persistent option. A minimal option definition is, for example:

option_name = 'help' # Command line option name
# Help message for the command line and session option web page
option_help = '''"true" or "false"
        Display HTML/help.html when starting a session.'''
# The default value for the option
option_default = "false"

The option is stored in plugin.option as a string.

It is possible to define an option parser to check it:

def option_set(plugin, value):
    plugin.option = int(value)

Developer tools

The developer tools are launched using a Makefile goal. The parameters are defined in the Makefile.