Where to start

Once you have your new tests properly designed, you can start thinking about how to implement them in qmtest. To create a new test, start the qmtest GUI and choose New Test from the File menu. This opens a test creation form with two fields: Test Name and Test Class.

Test Name is a string of one or more dot-separated names that maps to a test suite and test name in the qmtest test database. For example, the name:

isql.avg.avg_01

will create test avg_01 in test suite avg, which is in test suite isql, from qmtest's point of view. In the filesystem, it will create the file avg_01.qmt in directory avg.qms, a subdirectory of isql.qms in the test database root directory. You can change this test "name" anytime later by moving and/or renaming the test file.

Test Class is a combobox that lists all test types/classes registered in qmtest. Each test class contains special support for a particular type of test; for example, command.ShellCommandTest is designed to create tests that run shell scripts and check their output. For Firebird QA needs, we created the special test class fbqa.FirebirdTest, so choose this one.

Now you can press the Next button, which brings up another form that you need to fill in with your test definition/implementation.

To streamline the implementation process, it's good to split it into "building blocks", and handle them in order. These building blocks are:
  1. Identification and Description.
  2. Setup/cleanup of running environment, i.e. database schema and content, tools etc.
  3. Test execution code. If test cases are well defined, each has one and only one directly tested command, whose outcome is verified against expected output. If the direct output from the tested command(s) is not enough to verify correctness (some commands don't produce any "visible" output at all), you must use additional means, such as checking the content of system tables or the presence of a file on disk. This additional code should produce some output (printed data or an exception message) that can be verified against the expected output.
  4. Expected output from tested command(s) or additional checks.
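A hedged sketch of such an additional check: the tested command prints nothing, so the test queries a system table and prints a verifiable fact instead. sqlite3 stands in for a Firebird connection here only so the sketch runs anywhere; in a real test you would query Firebird's RDB$ tables through db_conn:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table customers (id integer)")   # the tested command
# CREATE TABLE produces no visible output, so the test prints a
# checkable fact that can be matched against the expected output:
row = conn.execute(
    "select count(*) from sqlite_master"
    " where type = 'table' and name = 'customers'").fetchone()
print("table exists:", row[0] == 1)
```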
Identification and Description

Each test implementation has the following identification/description attributes:
  • Test ID
  • Author
  • Target Engine
  • Target Platform
  • Bug identifier
  • Title
  • Description
A test implementation may depend on the version of the Firebird engine and/or the platform. We overcome this issue by creating multiple test implementations for a single "logical" test. These implementations are separate qmtest test definitions, i.e. they are stored in separate test files with distinct filenames. What makes them belong to a single test is the Test ID value, which must be the same for all implementations. To distinguish between these implementations, each test also has the attributes Target Engine (which defines the lowest engine version number this implementation works with) and Target Platform (which defines one or more platforms, separated by colons, that this implementation works on).

Of course, qmtest doesn't understand this arrangement and sees them as different tests (because in fact they are different), so if you ran a test suite that contains all these platform/version dependent tests, all except one would probably fail and thus screw up your test run results. You have to "extract" from the Firebird test database only those tests that are designed to run on your tested Firebird engine version and platform, into a separate, working qmtest test database. For this purpose (but not only for it), we created the qadbm tool, which lists (or copies to a specified directory) only tests that match a specified target engine version and platform.
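The selection rule could be sketched roughly like this (a hypothetical illustration; qadbm's actual matching logic may differ):

```python
def matches(impl_engine, impl_platforms, engine, platform):
    # An implementation fits if its Target Engine is not newer than
    # the engine under test and its Target Platform names the tested
    # platform (or is "All"). Platforms are colon-separated.
    def ver(s):
        return tuple(int(x) for x in s.split("."))
    platforms = impl_platforms.split(":")
    return (ver(impl_engine) <= ver(engine)
            and ("All" in platforms or platform in platforms))

print(matches("1.5", "All", "2.0", "Linux"))        # implementation fits
print(matches("2.0", "Windows", "1.5", "Windows"))  # engine too new
```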

When you're starting to implement a new test, you don't need to worry about platform dependencies, because they can easily be handled later. Your tests thus may start with the default value All for the Target Platform attribute. You should only check whether all requirements on the running environment can be met on all platforms (for example, that external tools you want to use are available for all platforms supported by Firebird).

What is very important to decide right at the beginning is the target version of the Firebird engine. Although you may want to start implementing your tests for the most recent development or stable version, we'd like to encourage you to install the oldest engine that supports the tested functionality and start implementing your tests against this version (and fill the Target Engine attribute with its version number). Firebird is developed to be backward and forward compatible, so many tests designed to work with an older version will also work with any newer version without any change (but some may not; more about that later). Although you may think that Firebird users run only the most recent stable version, so there is no need to run tests against older ones, there are some strong arguments for this practice:
  • Your test written first against an older engine may discover a regression that slipped unnoticed into a newer engine version. Although our testing is broad in scope, some "dark corners" may get less exposure than others.
  • Not all users run the most recent stable version, so we pay attention to older versions as well.
  • It supports the "defensive thinking" you'd need anyway to become a good developer and successful QA engineer :-)
Test IDs are hierarchical, dot-separated names that must be unique within the whole Firebird test database. You can get the list of IDs already used by other tests with the qadbm tool, or you can take a look at our published list of tests in the Firebird test database, but only qadbm will give you a really up-to-date list of IDs.

For the Author attribute, fill in your SourceForge user ID if you have one, or your nickname. This attribute can contain multiple names separated by colons.

If this test is a test case for a particular bug logged in our issue trackers, fill in the ID assigned to this issue into the Bug identifier attribute.

Title is a descriptive name for the test. This attribute is important and should clearly but briefly state what functionality is tested. The hierarchical Test ID already defines the test's category, so you don't need to include this information in the Title. For example, test basic.db.01 has the title "Empty DB - RDB$DATABASE content" to indicate that this test checks the correct content of the RDB$DATABASE system table in an (otherwise) empty database.

Description is a free-text attribute. Typically, it contains a more verbose description of what the test does than the Title, along with any information you may see as important for others to know (dependencies, known issues etc.). It's good to include the design specification in it, as it will help fix any issues with the implementation that may occur in the future. If this would make the description too long and clumsy, fill in only the most important points from the test specification, as details can be figured out from the implementation when necessary.

Setup and Cleanup

Tests may need a special environment to run. Several tests may require the same running environment, but while some tests may safely share it, some may not, as it's necessary to prevent any unwanted interference between tests that could screw up their results.

Almost all Firebird-related tests work with one Firebird database, so our Firebird test class has direct internal support to provide it.

The Firebird test class has the following attributes to set up/clean up the working database:
  • Database Creation Method. An enumeration field. The value of this field must be one of a preselected set of enumerals that are "Create New", "Connect To Existing", "Restore From Backup" and "None". The default value of this field is "Create New".
If "Create New" is chosen, kinterbasdb will be used to create a database with the given parameters. A connection to the database will then be made with kinterbasdb.

If "Connect to existing" is chosen, kinterbasdb will be used to connect to the given database using the given parameters.

If "Restore from Backup" is chosen, gbak will be used to restore a database with the given name from the given backup file.

If the database already exists, the test will raise an error. If "None" is chosen, the test makes no assumption about the use of any database.
  • Database Path Property Name. The name of the context property which is set to the path to the database. The default value of this field is "database_location".
  • Database Name. This value is concatenated with the server and database locations from the context file. The default value of this field is "database_name".
  • Path to Backup File. The backup file to be used (if database is to be restored from backup).
  • User Name. The user name to use to access the database. If the database already exists, this will be assumed to be the username granting access to the database. If the database is being created, then this field's value will be set as the username. The default value of this field is "SYSDBA".
  • User Password. The password to use to access the database. If the database already exists, this will be assumed to be the password granting access to the database. If the database is being created, then this field's value will be set as the password. The default value of this field is "masterkey".
  • Character Set. Character set to use for database connection. An enumeration field. The value of this field must be one of a preselected set of enumerals. The default value of this field is "NONE".
  • Page Size. Page size for database (if database is being created). Defaults to Firebird Default. The value of this field must be one of a preselected set of enumerals.
  • SQL Dialect. The SQL dialect to use. The connect and/or create statements will be executed using this dialect. The value of this field must be one of a preselected set of enumerals. The default value of this field is "3".
  • Database Population Method. DEPRECATED. This field was used to determine how the database will be populated. The latest version of the Firebird QA test class determines the method from the presence of content in the fields SQL Commands, Data Tuple and SQL Insert Statement, so ISQL and Python-based initialization methods are not mutually exclusive and can be combined. This attribute is retained for backward compatibility with existing tests.
  • SQL Commands. The SQL commands to use to populate the database. These commands will be executed using ISQL, in the context of the database associated with this test.
  • Data Tuple. Data tuple to populate database with. The data tuple given in this field will be used to provide the parameters to the SQL insert statement. This field needs to be a tuple of tuples or a list of lists. Example: ( ("Jane", 23), ("Sam", 56) ) or [ ["Sally", 21], ]
  • SQL Insert Statement. The parameterized SQL insert statement to use with the data tuple. The variable parameters given in this statement will be provided by the data tuple given above. This statement must be parameterized and include the same number of parameters as each tuple in the data tuple. Example: "insert into people values (?, ?)"
  • Drop Database?. This field determines whether or not the database will be dropped. The default value of this field is "true".
If this field is set to "true", then prior to exiting, the database will be removed. This still applies if the test fails or raises an exception during execution. If the database cannot be dropped, and the test has already failed or generated an error for some other reason, then the location of the database will be given as an annotation to the result. It is important that the database is then removed manually, as subsequent tests may fail if they attempt to create a database with the same name.
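How the Data Tuple and SQL Insert Statement attributes combine can be illustrated like this: the test class presumably performs the equivalent of a DB-API executemany() call. sqlite3 is used here only as a stand-in driver so the sketch is runnable; the table name and data come from the examples above:

```python
import sqlite3

data_tuple = (("Jane", 23), ("Sam", 56))
insert_stmt = "insert into people values (?, ?)"

conn = sqlite3.connect(":memory:")
conn.execute("create table people (name text, age integer)")
# One execution of the parameterized statement per element of the tuple:
conn.executemany(insert_stmt, data_tuple)
print(conn.execute("select count(*) from people").fetchone()[0])  # 2
```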

If your test requires more than one database, or other resources like Firebird users, temporary directories etc., you can use qmtest resources. These resources are similar to tests in that they are defined as special extension classes for qmtest, but their purpose is to handle the setup and cleanup of some resource for one or more tests. For more information about qmtest resources, check out the qmtest manual.

Test execution code

The Firebird-specific test class supports two different methods that you can use to run commands against the Firebird engine:
  • SQL script that's run by ISQL.
  • Arbitrary Python code or expression that may use KinterbasDB to interact with Firebird engine.
The test's behaviour is controlled by the following attributes:
  • Test Statement Type and Expected Return Type. The test statement type (Python/SQL) and expected return type (boolean/string). The value of this field must be one of a preselected set of enumerals that are: "None: None", "Python: True", "Python: False", "Python: String" and "SQL: String". The default value of this field is "None: None".
If "Python: True" or "Python: False" is selected, Python source code can be supplied to be executed prior to the evaluation of the test statement. Apart from catching thrown exceptions, no checking is performed on the return value(s) of the source code. Because of this, the test statement is what is actually evaluated.

If "Python: String" is selected, the standard output stream (if any) generated by the Python source code will be captured and compared against the given string. If a Python test statement is given, then the Python source code will be executed but its output will be ignored, and the output of the test statement will be captured and compared against the given result string.
 
If "SQL: String" is selected, then the given SQL command(s) entered in the source code field will be executed using ISQL in the context of the database associated with this test, and the output (if any) compared with the given string(s).
  • Python/SQL Source Code. The SQL or python test statement(s) to be executed.
If "Python: True" or "Python: False" was selected as the type/expected return value of the test statement, then the contents of this field are optional, and will be executed before the test statement itself (which is required) is evaluated. In this case any values or output returned or generated by this code will be ignored (unless any exceptions are thrown). The active connection to the database associated with this test is available in the namespace of the source code as "db_conn". The test's context is available as "context" and kinterbasdb is also present as "kdb".

If "Python: String" was selected as the type/expected return value of the test statement, then any output generated by this code will be compared against the given string(s). The only exception to this rule is if a test statement is given below. In that case, this code will be executed but the output of the test statement (and not this source code) is what will be compared against the given result string. The active connection to the database associated with this test is available in the namespace of the source code as "db_conn". The test's context is available as "context" and kinterbasdb is also present as "kdb".

If "SQL: String" was selected as the type/expected return value of the test statement, then the contents of this field will be executed as ISQL script in the context of the database associated with this test.
  • Python Expression. The python statement to evaluate.
If "Python: True" or "Python: False" was selected as the type/expected return value of the test statement, then after the Python source code is executed (if any was given), the value of this statement will be compared against the selected True/False value.

If "Python: String" was selected, then this field is optional. If it is given a value, then after the Python source code is executed (if any was given), the standard output from this statement will be captured and compared against the given string.

The active connection to the database associated with this test is available in the namespace of the source code as "db_conn". The test's context is available as "context" and kinterbasdb is also present as "kdb".

Both Python source code and ISQL scripts can refer to context variables in parentheses preceded by a dollar sign; these references are substituted with their values before execution. For example, $(database_location) is replaced by the value of the context variable database_location. Variable names are case sensitive!
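The substitution step might look roughly like this (a hypothetical sketch, not the test class's actual code):

```python
import re

def expand(text, context):
    # Replace each $(name) reference with the corresponding context
    # value; names are looked up exactly as written (case sensitive).
    return re.sub(r"\$\(([^)]+)\)",
                  lambda m: str(context[m.group(1)]), text)

print(expand("connect '$(database_location)test.fdb';",
             {"database_location": "/tmp/"}))
# connect '/tmp/test.fdb';
```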

Python source code can import and use any standard Python 2.3 library module, plus the KinterbasDB and MX Base Extensions modules. Of course, you can also use any module that isn't included in the standard Python 2.3 library, but in that case you must declare that dependency in the test description and in the README file for the Firebird test database!

Python source code is executed in its own, "clean" environment, i.e. modules used by qmtest itself are not accessible without explicit import. But to make Python test development easier, each Python test has its global namespace extended with several modules, routines and variables that can be used directly without explicit import:
  • context — QMTest Context variable passed to test (see qmtest documentation for details)
  • kdb — KinterbasDB Firebird access module.
  • printData — Special helper routine to print formatted data from open cursor to standard output.
  • getDatabaseInfo — Special helper function to retrieve database information. It's a wrapper for isc_database_info calls.
  • sys — sys module from standard Python library
  • dsn — Full (server, directory and database filename) name for working database defined for running test.
  • user_name — Value of User Name attribute from running test
  • user_password — Value of User Password attribute from running test
  • page_size — Value of Page Size attribute from running test
  • sql_dialect — Value of SQL Dialect attribute from running test
  • character_set — Value of Character Set attribute from running test
  • db_path_property — Value of Database Path Property Name attribute from running test
  • db_conn — KinterbasDB Connection object connected to working database (if it was requested in test definition)
We may add more objects "injected" into test's sandbox over time (see README.Python file in Firebird test database for actual list of objects).
NOTE: Don't hesitate to ask us about addition of functions, objects or modules you need or use often in your tests!
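A helper along the lines of printData might look roughly like this (a hypothetical sketch; the real helper's exact column formatting is not specified here). sqlite3 stands in for the kinterbasdb connection so the sketch runs anywhere; in a real test you would pass a cursor from db_conn:

```python
import sqlite3

def print_data_sketch(cursor):
    # A header row built from cursor.description, then the fetched
    # rows, one per line. Illustrative only.
    lines = [" ".join(d[0] for d in cursor.description)]
    for row in cursor.fetchall():
        lines.append(" ".join(str(v) for v in row))
    text = "\n".join(lines)
    print(text)
    return text

cur = sqlite3.connect(":memory:").cursor()
cur.execute("select 1 as ID, 'x' as NAME")
out = print_data_sketch(cur)
```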
Expected output

This part of your test defines the expected output from the executed test code. QMTest compares this definition with the actual standard and error output from the test run to determine its outcome (PASS or FAIL). So if your code doesn't produce any output, there is nothing to test, and you must redefine your test. There is only one exception to this rule: when the test is defined with the "Python: True" or "Python: False" type, the test's outcome is determined by matching the Python expression against the specified value.

Expected output is also the most volatile part of any test: while setup/cleanup and test code probably do not change between Firebird versions or platforms, the actual output may change as Firebird evolves (for example, error messages are enhanced to be more precise and meaningful).

The run-verification part of a test consists of the following test attributes:
  • Expected Result String. The expected result string for the test statement(s) (if not a boolean).
If "SQL: String" or "Python: String" was selected as the type/expected return value of the test statement, then the output generated by Python or SQL will be compared against the text given here.

If any regular expression substitutions are provided (see below), they will be applied to both the expected and actual outputs of the SQL/python expressions. If any differences exist between the expected/actual outputs, then a diff will be provided as an annotation to the test results.

The text is stored verbatim; whitespace and indentation are preserved. The default value of this field is "".
  • Expected stderr. The expected Standard Error output for the test statement(s) (if not a boolean).
If "SQL: String" was selected as the type/expected return value of the test statement, then the error output generated by SQL will be compared against the text given here.

If any regular expression substitutions are provided (see below), they will be applied to both the expected and actual error outputs of the SQL expressions. If any differences exist between the expected/actual outputs, then a diff will be provided as an annotation to the test results.

The text is stored verbatim; whitespace and indentation are preserved. The default value of this field is "".
  • Substitutions. Regular expression substitutions. Each substitution will be applied to both the expected and actual stdout and stderr of the expressions. The comparison will be performed after the substitutions have been performed.
You can use substitutions to ignore insignificant differences between the expected and actual outputs.

This attribute is a set field. A set contains zero or more elements, all of the same type. The elements of the set are described below:

A substitution consists of a regular expression pattern and a substitution string. When the substitution is applied, all substrings matching the pattern are replaced with the substitution string. The substitution string may reference matched groups in the pattern.

The regular expression and substitution syntax are those of Python's standard "'re' regular expression module".

For example, the following substitution strips out all lines starting with "Transaction -". It's used to cut out insignificant/volatile parts of the output of ISQL's SHOW DATABASE command:

Pattern: \nTransaction -.*
Replacement:
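Applied with Python's re module, the substitution above behaves like this (the SHOW DATABASE output is abbreviated for illustration):

```python
import re

show_db = ("Database: /tmp/test.fdb\n"
           "Transaction - oldest 5\n"
           "Page size 4096")
# The pattern removes the volatile "Transaction -" line; "." does not
# match newlines, so each match stops at the end of that line.
cleaned = re.sub(r"\nTransaction -.*", "", show_db)
print(cleaned)
```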

Dealing with engine version and platform dependencies

Once you have created (and verified) your new test for a single engine version on your favourite platform, it's time to check on newer engine versions and other platforms whether it still works as expected and with the same outcome. Of course, it may happen that you don't have more Firebird versions or platforms available, so you may skip this check, but in that case you should clearly state in the test's Description on which version/platform it was created and that it's not verified on other versions/platforms. Other QA engineers can then quickly discover and solve issues in other environments when they run into them.

When you find out that a test doesn't work as expected on some platform or engine version, you have to create a new test version. For well-designed tests, it's mostly a simple and straightforward routine:
  1. Start the QMTest GUI and run the test that fails. You'll get a nice annotation that should contain all the information you need to fix it for the new platform/engine version.
  2. Create a copy of the broken test under a different filename, but in the same suite/subdirectory in the Firebird test database. Open it in the QMTest GUI.
  3. When the test fails on a different platform but the same engine version, change the Target Platform values in the original and the new copy to complementary values. Most platform differences are between POSIX platforms as a whole and Windows, so in most cases you can create just two platform versions: one for Windows, and a second for the remaining platforms (value "Linux:Solaris:HP-UX:FreeBSD:Darwin:Sinix-Z").
When the test fails on a different engine version, update the value of Target Engine in the test copy to the engine version where the first one failed.
  4. Analyze annotations from the failed test run.
If the test outcome is UNTESTED, the problem is in resource initialization. It could be quite simple to solve (a path or filename changed) or an issue in the resource class itself (in that case, contact our QA team on the Firebird-test mailing list).

If the test outcome is ERROR, the problem is in test initialization or execution. As with resources, it could be simple or more difficult to fix, but this time the problem is in the code defined in the test, or in the test class itself.

If the test outcome is FAIL, the problem is that the result from the test run now differs from the specified expected standard or error output. It should be very easy to fix.
  5. When the new test version works (its outcome is PASS), don't forget to add it to the Firebird test database in CVS (or send it to us if you don't have write access to our CVS).
Test implementation examples

The following test definitions/implementations are real tests from the Firebird test database.

1. Simple ISQL-based test:

Test ID:         basic.db.01
Author:          pcisar
Target Engine:   1.0
Target Platform: All
Bug identifier:
Title:           Empty DB - RDB$DATABASE content

Description:     Check the correct content of RDB$DATABASE for a fresh, empty database.

Database Creation Method:    Create New
Database Path Property Name: database_location
Database Name:               basic_test
Path to Backup File:
User Name:                   SYSDBA
User Password:               masterkey
Character Set:               NONE
Page Size:                   Default
SQL Dialect:                 3
SQL Commands:
Data Tuple:
SQL Insert Statement:

Test Statement Type and Expected Return Type:   SQL: String

Python/SQL Source Code:
select * from RDB$DATABASE;

Python Expression:

Expected Result String:
RDB$DESCRIPTION   RDB$RELATION_ID RDB$SECURITY_CLASS              RDB$CHARACTER_SET_NAME
================= =============== =============================== ===============================

                        128                          

Expected stderr:

Substitutions:      None
Drop Database?:     true
Prerequisite Tests: None
Resources:          None


2. Test written in Python, with substitutions

Test ID:         database.alter.01
Author:          pcisar:sskopalik
Target Engine:   1.0
Target Platform: All
Bug identifier:
Title:           ALTER DATABASE ADD FILE
Description:     Adding a secondary file to the database

Database Creation Method:    Create New
Database Path Property Name: database_location
Database Name:               database_test.fdb
Path to Backup File:
User Name:                   SYSDBA
User Password:               masterkey
Character Set:               NONE
Page Size:                   Default
SQL Dialect:                 3
SQL Commands:
Data Tuple:
SQL Insert Statement:

Test Statement Type and Expected Return Type:   Python: String

Python/SQL Source Code:
cursor=db_conn.cursor()
cursor.execute("ALTER DATABASE ADD FILE '$(DATABASE_LOCATION)TEST.G00' STARTING 10000")
db_conn.commit()
cursor.execute("SELECT cast(RDB$FILE_NAME as varchar(50)), \
  RDB$FILE_SEQUENCE,RDB$FILE_START,RDB$FILE_LENGTH FROM RDB$FILES")
printData(cursor)

Python Expression:

Expected Result String:
CAST                                             RDB$FILE_SEQUENCE RDB$FILE_START RDB$FILE_LENGTH
------------------------------------------------ ----------------- -------------- ---------------
TEST.G00                                1                 10000         0

Expected stderr:

Substitutions:
Pattern: ^.*TEST.G
Replacement: TEST.G
Pattern: [ ]+
Replacement: \t

Drop Database?:     true
Prerequisite Tests: None
Resources:          None


3. Test with database initialization

Test ID:         intfunc.count.02
Author:          pcisar:sskopalik
Target Engine:   1.0
Target Platform: All
Bug identifier:
Title:           COUNT
Description:
Count of Not Null values and count of rows and count of distinct values

Dependencies:
CREATE DATABASE
CREATE TABLE
INSERT
Basic SELECT

Database Creation Method:    Create New
Database Path Property Name: database_location
Database Name:               count_test.fdb
Path to Backup File:
User Name:                   SYSDBA
User Password:               masterkey
Character Set:               NONE
Page Size:                   Default
SQL Dialect:                 3

SQL Commands:
CREATE TABLE test( id INTEGER);
INSERT INTO test VALUES(0);
INSERT INTO test VALUES(0);
INSERT INTO test VALUES(null);
INSERT INTO test VALUES(null);
INSERT INTO test VALUES(null);
INSERT INTO test VALUES(1);
INSERT INTO test VALUES(1);
INSERT INTO test VALUES(1);
INSERT INTO test VALUES(1);

Data Tuple:
SQL Insert Statement:

Test Statement Type and Expected Return Type:   SQL: String

Python/SQL Source Code:
SELECT COUNT(*), COUNT(ID), COUNT(DISTINCT ID) FROM test;

Python Expression:


Expected Result String:
       COUNT        COUNT        COUNT
============ ============ ============
           9            6            2

Expected stderr:

Substitutions:      None
Drop Database?:     true
Prerequisite Tests: None
Resources:          None


Debugging Python-based test

Despite the golden rule for test case design, it's possible that some tests written in Python will need complex computations, and although Python is a very clean language, you may need to trace and debug them. Without special support, it's not easy to debug Python code executed from QMTest, because Python code used in tests is executed via the exec or eval functions. To remedy this, we integrated direct support for pdb (the interactive Python source code debugger) into our test class.

The integrated debugger is disabled by default. It's activated by the existence of a debug variable in the QMTest context, so you can use a special "debug" context file that contains this variable, or specify it on the command line with the -c option. For example:

qmtest run -c debug=1 mytest

The value of the debug variable is not important, as the debugger is activated by the mere existence of the variable, but QMTest requires that context variables have values, so you need to provide one.

Once the integrated debugger is activated, all Python code in tests you run in QMTest is automatically executed in it, i.e. the test execution is stopped and the pdb console is displayed. You can use all pdb commands to inspect and trace the source code. The Firebird test class also makes a copy of the source code in a temporary file for you, so the list pdb command will show you the traced source code.
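The activation logic described above can be sketched like this (names and structure are illustrative, not the real test class's API; the pdb branch is interactive and blocks for user input):

```python
import pdb

def run_test_code(source, context):
    # The test class compiles the test's Python source and, when a
    # "debug" variable is present in the context, hands it to pdb
    # instead of plain exec.
    code = compile(source, "<test>", "exec")
    namespace = {"context": context}
    if "debug" in context:          # activated by mere presence
        pdb.run(code, namespace)    # opens the interactive pdb console
    else:
        exec(code, namespace)
    return namespace

ns = run_test_code("result = 2 + 2", {})   # no debug variable: plain exec
print(ns["result"])  # 4
```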

The pdb debugger is CLI-based, so there are two important things you must take into account when working with it:
  1. Although you can debug tests from the QMTest GUI (the pdb console is available in the terminal window from which you run QMTest), it's more suitable to use the qmtest run command for debugging.
  2. Because pdb needs access to standard output and input, it's not possible to redirect them while a test is in debug mode, so all tests that compare actual standard output with expected output will end with a FAIL outcome.
Using UNICODE in tests

Some tests may require the use of international characters. To support those tests, the FirebirdTest test class supports UNICODE characters encoded as UTF-8 in the following test attributes:
  • Title
  • Description
  • Data Tuple
  • Python/SQL Source Code
  • Python Expression
  • Expected Result String
  • Expected stderr
  • Substitutions
Unfortunately, using UNICODE characters is still not as easy and straightforward as it should be, and there are some limitations and pitfalls you have to take into account when you implement or work with tests that use UNICODE characters.
  1. First, you will need a Python version that supports it. If you have Python 2.3 or greater, you should not have any trouble.
  2. You should not experience any problems displaying UTF-8 test attribute values in the QMTest GUI, provided your browser can handle them and you have appropriate font(s) installed. But you may experience UNICODE rendering problems when using the QMTest CLI interface, as it needs UTF-8 support in your terminal. If UNICODE characters are not displayed correctly, consult your OS/terminal documentation for instructions on how to enable UTF-8 support.
  3. Use the QMTest GUI to inspect and edit tests with UNICODE characters. You can also use any text or XML editor with UTF-8 support, but if you do, you should always check your changes with the QMTest GUI to verify that QMTest can handle them correctly.
  4. You cannot use UNICODE characters in ISQL scripts. Right now, UNICODE is available only for Python source code.
  5. You must specify the UNICODE_FSS character set for the database connection.
Sample test with UNICODE:

Title:              PXW_CSY (Czech) sort test

Dependencies:
CREATE DATABASE
CREATE TABLE
INSERT
Basic SELECT, ORDER BY

Database Creation Method:    Create New
Database Path Property Name: database_location
Database Name:               sort_test.fdb
Path to Backup File:
User Name:                   SYSDBA
User Password:               masterkey
Character Set:               UNICODE_FSS
Page Size:                   Default
SQL Dialect:                 3

SQL Commands:
CREATE TABLE test (C1 VARCHAR(50) CHARACTER SET WIN1250 COLLATE PXW_CSY);

Data Tuple:
(('a',),('aaa',),('abc',),('aba',),('áaa',),('aáa',),('ÁAa',),
('ÁÁA',),('b',),('c',),('č',),('d',),('ď',),('e',),('f',),('g',),
('h',),('ch',),('i',),('í',),('j',),('k',),('l',),('m',),('n',),
('o',),('p',),('ó',),('ĺ',),('ň',),('q',),('r',),('ř',),('s',),
('š',),('t',),('ť',),('u',),('ú',),('ů',),('v',),('w',),('x',),
('y',),('ý',),('z',),('ž',),('Á',),('B',),('C',),('Č',),('D',),
('Ď',),('E',),('É',),('Ě',),('ě',),('F',),('G',),('H',),('CH',),
('I',),('Í',),('J',),('K',),('L',),('Ĺ',),('M',),('N',),('Ň',),
('O',),('Ó',),('P',),('Q',),('R',),('Ř',),('S',),('Š',),('T',),
('Ť',),('U',),('Ú',),('Ů',),('V',),('W',),('X',),('Y',),('Ý',),
('Z',),('Ž',),('é',),('A',),('á',))

SQL Insert Statement:
insert into test values (?)

Test Statement Type and Expected Return Type:   Python: String

Python/SQL Source Code:
cursor = db_conn.cursor()
cursor.execute('SELECT C1 FROM TEST ORDER BY C1')
printData(cursor)

Expected Result String:
C1
--------------------------------------------------
a
A
á
Á
aaa
aáa
áaa
ÁAa
ÁÁA
aba
abc
...snip...
ť
Ť
u
U
ú
Ú
ů
Ů
v
V
w
W
x
X
y
Y
ý
Ý
z
Z
ž
Ž
Expected stderr:

Substitutions:      None
Drop Database?:     true
Prerequisite Tests: None
Resources:          None