Writing Good Autopilot Tests
============================

This document is an introduction to writing good autopilot tests. It should be treated as additional material on top of all the things you'd normally do to write good code. Put another way: test code follows all the same rules as production code - it must follow the coding standards and be of professional quality.

Several points in this document are written with respect to the unity autopilot test suite. This is incidental, and doesn't mean that these points do not apply to other test suites!

.. _write-expressive-tests:

Write Expressive Tests
++++++++++++++++++++++

Unit tests are often used as a reference for how your public API should be used. Functional (Autopilot) tests are no different: they can be used to figure out how your application should work from a functional standpoint. However, this only works if your tests are written in a clear, concise, and most importantly expressive style. There are many things you can do to make your tests easier to read:

**Pick Good Test Case Class Names**

Pick a name that encapsulates all the tests in the class, but is as specific as possible. If necessary, break your tests into several classes, so your class names can be more specific. This is important because when a test fails, the test id is the primary means of identifying the failure. The more descriptive the test id is, the easier it is to find the fault and fix the test.
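
For example (the class names below are purely illustrative), splitting one catch-all test case class into more focused classes makes a failing test id much easier to interpret:

.. code-block:: python

    # Too broad: covers many unrelated behaviours.
    class DashTests(AutopilotTestCase):
        ...

    # More focused: the class name alone narrows down where a failure lives.
    class DashRevealTests(AutopilotTestCase):
        ...

    class DashKeyboardNavigationTests(AutopilotTestCase):
        ...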

**Pick Good Test Case Method Names**

Similar to picking good test case class names, picking good method names makes your test id more descriptive. We recommend writing very long test method names, for example:

.. code-block:: python

    # bad example:
    def test_save_dialog(self):
        # test goes here

    # better example:
    def test_save_dialog_can_cancel(self):
        # test goes here

    # best example:
    def test_save_dialog_cancels_on_escape_key(self):
        # test goes here

**Write Docstrings**

You should write docstrings for your tests. Often the test method name is enough to describe what the test does, but an English description is still useful when reading the test code. For example:

.. code-block:: python

    def test_save_dialog_cancels_on_escape_key(self):
        """The Save dialog box must cancel when the escape key is pressed."""

We recommend following :pep:`257` when writing all docstrings.


Test One Thing Only
+++++++++++++++++++

Tests should test one thing, and one thing only. Since we're not writing unit tests, it's fine to have more than one assert statement in a test, but the test should cover one feature only. How do you tell if you're testing more than one thing? There are two primary ways:

 1. Can you describe the test in a single sentence without using words like 'and', 'also', etc? If not, you should consider splitting your tests into multiple smaller tests.

 2. Tests usually follow a simple pattern:

  a. Set up the test environment.
  b. Perform some action.
  c. Test things with assert statements.

  If you feel you're repeating steps 'b' and 'c', you're likely testing more than one thing and should consider splitting your tests up.

**Good Example:**

.. code-block:: python

    def test_alt_f4_close_dash(self):
        """Dash must close on alt+F4."""
        self.dash.ensure_visible()
        self.keyboard.press_and_release("Alt+F4")
        self.assertThat(self.dash.visible, Eventually(Equals(False)))

This test tests one thing only. Its three lines match perfectly with the typical three stages of a test (see above), and it only tests for things that it's supposed to. Remember that it's fine to assume that other parts of unity work as expected, as long as they're covered by an autopilot test somewhere else - that's why we don't need to verify that the dash really did open when we called ``self.dash.ensure_visible()``.

Fail Well
+++++++++

Make sure your tests test what they're supposed to. It's very easy to write a test that passes. It's much more difficult to write a test that only passes when the feature it's testing is working correctly, and fails otherwise. There are two main ways to achieve this:

* Write the test first. This is easy to do if you're trying to fix a bug in Unity. In fact, having a bug that can be reproduced by an autopilot test will help you fix the bug as well. Once you think you have fixed the bug, make sure the autopilot test you wrote now passes. The general workflow will be:

 0. Branch unity trunk.
 1. Write autopilot test that reproduces the bug.
 2. Commit.
 3. Write code that fixes the bug.
 4. Verify that the test now passes.
 5. Commit. Push. Merge.
 6. Celebrate!

* If you're writing tests for a bug-fix that's already been written but is waiting on tests before it can be merged, the workflow is similar but slightly different:

 0. Branch unity trunk.
 1. Write autopilot test that reproduces the bug.
 2. Commit.
 3. Merge code that supposedly fixes the bug.
 4. Verify that the test now passes.
 5. Commit. Push. Supersede the original merge proposal with your branch.
 6. Celebrate!

Think about design
++++++++++++++++++

Much in the same way you might choose a functional or object-oriented paradigm for a piece of code, a test suite can benefit from choosing a good design pattern. One such design pattern is the page object model. The page object model can reduce test case complexity and allow the test suite to grow and easily adapt to changes within the underlying application. Check out :ref:`page_object_guide`.
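
As a rough sketch (the page class, widget types and object names below are invented for illustration, not taken from an existing application), a page object wraps proxy-object lookups behind intention-revealing methods:

.. code-block:: python

    class SaveDialogPage(object):
        """Page object for a hypothetical save dialog."""

        def __init__(self, app_proxy, pointer):
            self._dialog = app_proxy.select_single('SaveDialog')
            self._pointer = pointer

        def cancel(self):
            """Dismiss the dialog using its cancel button."""
            button = self._dialog.select_single('Button', objectName='btnCancel')
            self._pointer.click_object(button)

Tests then call ``SaveDialogPage(self.app, self.pointer).cancel()`` instead of repeating widget lookups, so a change to the dialog's internals only requires updating the page object.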

Test Length
+++++++++++

Tests should be short - as short as possible while maintaining readability. Longer tests are harder to read, harder to understand, and harder to debug. Long tests are often symptomatic of several possible problems:

 * Your test requires complicated setup that should be encapsulated in a method or function.
 * Your test is actually several tests all jammed into one large test.

**Bad Example:**

.. code-block:: python

    def test_panel_title_switching_active_window(self):
        """Tests the title shown in the panel with a maximized application."""
        # Locked Launchers on all monitors
        self.set_unity_option('num_launchers', 0)
        self.set_unity_option('launcher_hide_mode', 0)

        text_win = self.open_new_application_window("Text Editor", maximized=True)

        self.assertTrue(text_win.is_maximized)
        self.assertThat(self.panel.title, Equals(text_win.title))
        sleep(.25)

        calc_win = self.open_new_application_window("Calculator")
        self.assertThat(self.panel.title, Equals(calc_win.application.name))

        icon = self.launcher.model.get_icon_by_desktop_id(text_win.application.desktop_file)
        launcher = self.launcher.get_launcher_for_monitor(self.panel_monitor)
        launcher.click_launcher_icon(icon)

        self.assertTrue(text_win.is_focused)
        self.assertThat(self.panel.title, Equals(text_win.title))

This test can be simplified into the following:

.. code-block:: python

    def test_panel_title_switching_active_window(self):
        """Tests the title shown in the panel with a maximized application."""
        text_win = self.open_new_application_window("Text Editor", maximized=True)
        self.open_new_application_window("Calculator")

        icon = self.launcher.model.get_icon_by_desktop_id(text_win.application.desktop_file)
        launcher = self.launcher.get_launcher_for_monitor(self.panel_monitor)
        launcher.click_launcher_icon(icon)

        self.assertTrue(text_win.is_focused)
        self.assertThat(self.panel.title, Equals(text_win.title))

Here's what we changed:

 * Removed the ``set_unity_option`` lines, as they didn't affect the test results at all.
 * Removed assertions that were duplicated from other tests. For example, there's already an autopilot test that ensures that new applications have their title displayed on the panel.

With a bit of refactoring, this test could be even smaller (the launcher proxy classes could have a method to click an icon given a desktop id), but this is now perfectly readable and understandable within a few seconds of reading.
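
A hypothetical helper along these lines (suggested by the paragraph above; it is not an existing API) would shrink the test further:

.. code-block:: python

    # Hypothetical method on the launcher proxy class:
    def click_icon_by_desktop_id(self, desktop_id, monitor):
        """Click the launcher icon matching ``desktop_id`` on ``monitor``."""
        icon = self.model.get_icon_by_desktop_id(desktop_id)
        self.get_launcher_for_monitor(monitor).click_launcher_icon(icon)

The test body would then collapse to a single call such as ``self.launcher.click_icon_by_desktop_id(text_win.application.desktop_file, self.panel_monitor)``, followed by the two assertions.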

Good docstrings
+++++++++++++++

Test docstrings are used to communicate to other developers what the test is supposed to be testing. Test docstrings must:

 1. Conform to `PEP8 <http://www.python.org/dev/peps/pep-0008/>`_ and `PEP257 <http://www.python.org/dev/peps/pep-0257/>`_ guidelines.
 2. Avoid words like "should" in favor of stronger words like "must".
 3. Contain a one-line summary of the test.

Additionally, they should:

 1. Include the launchpad bug number (if applicable).
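
For instance, a docstring that references a bug might look like this (the test name is reused from earlier in this guide, and the bug number is a placeholder):

.. code-block:: python

    def test_save_dialog_cancels_on_escape_key(self):
        """The Save dialog box must cancel when the escape key is pressed.

        Regression test for LP: #123456.
        """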

**Good Example:**

.. code-block:: python

    def test_launcher_switcher_next_keeps_shortcuts(self):
        """Launcher switcher next action must keep shortcuts after they've been shown."""

Within the context of the test case, the docstring is able to explain exactly what the test does, without any ambiguity. In contrast, here's a poorer example:

**Bad Example:**

.. code-block:: python

    def test_switcher_all_mode_shows_all_apps(self):
        """Test switcher 'show_all' mode shows apps from all workspaces."""

The docstring explains what the desired outcome is, but not how we're testing it. This style of sentence assumes test success, which is not what we want! A better version of this code might look like this:

.. code-block:: python

    def test_switcher_all_mode_shows_all_apps(self):
        """Switcher 'show all' mode must show apps from all workspaces."""

The difference between these two is subtle, but important.

Test Readability
++++++++++++++++

The most important attribute of a test is that it is correct - it must test what it's supposed to test. The second most important attribute is that it is readable. It should be possible for someone other than the test author to examine a test on its own without any undue hardship. There are several things you can do to improve test readability:

1. Don't abuse the ``setUp()`` method. It's tempting to put code that's common to every test in a class into the ``setUp`` method, but it leads to tests that are not readable by themselves. For example, this test uses the ``setUp`` method to start the launcher switcher, and ``tearDown`` to cancel it:

 **Bad Example:**

 .. code-block:: python

     def test_launcher_switcher_next(self):
         """Moving to the next launcher item while switcher is activated must work."""
         self.launcher_instance.switcher_next()
         self.assertThat(self.launcher.key_nav_selection, Eventually(GreaterThan(0)))

 This leads to a shorter test (which we've already said is a good thing), but the test itself is incomplete. Without scrolling up to the ``setUp`` and ``tearDown`` methods, it's hard to tell how the launcher switcher is started. The situation gets even worse when test classes derive from each other, since the code that starts the launcher switcher may not even be in the same class!

 A much better solution in this example is to initiate the switcher explicitly, and use ``addCleanup()`` to cancel it when the test ends, like this:

 **Good Example:**

 .. code-block:: python

     def test_launcher_switcher_next(self):
         """Moving to the next launcher item while switcher is activated must work."""
         self.launcher_instance.switcher_start()
         self.addCleanup(self.launcher_instance.switcher_cancel)

         self.launcher_instance.switcher_next()
         self.assertThat(self.launcher.key_nav_selection, Eventually(GreaterThan(0)))

 The code is longer, but it's still very readable. It also follows the setup/action/test convention discussed above.

 Appropriate uses of the ``setUp()`` method include the following (a short sketch appears after this list):

 * Initialising test class member variables.
 * Setting unity options that are required for the test. For example, many of the switcher autopilot tests set a unity option to prevent the switcher going into details mode after a timeout. This isn't part of the test, but makes the test easier to write.
 * Setting unity log levels. The unity log is captured after each test. Some tests may adjust the verbosity of different parts of the Unity logging tree.
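
 A minimal sketch of such a ``setUp`` (the option name, value and member variable are illustrative only):

 .. code-block:: python

     def setUp(self):
         super().setUp()
         # A test-wide unity option, set here so each test body stays short.
         self.set_unity_option('alt_tab_timeout', False)
         # Plain member variable initialisation.
         self.old_selection = None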

2. Put common setup code into well-named methods. If the "setup" phase of a test is more than a few lines long, it makes sense to put this code into its own method. Pay particular attention to the name of the method you use. You need to make sure that the method name is explicit enough to keep the test readable. Here's an example of a test that doesn't do this:

 **Bad Example:**

 .. code-block:: python

    def test_showdesktop_hides_apps(self):
        """Show Desktop keyboard shortcut must hide applications."""
        self.start_app('Character Map', locale='C')
        self.start_app('Calculator', locale='C')
        self.start_app('Text Editor', locale='C')

        # show desktop, verify all windows are hidden:
        self.keybinding("window/show_desktop")
        self.addCleanup(self.keybinding, "window/show_desktop")

        open_wins = self.bamf.get_open_windows()
        for win in open_wins:
            self.assertTrue(win.is_hidden)

 In contrast, we can refactor the test to look a lot nicer:

 **Good Example:**

 .. code-block:: python

    def test_showdesktop_hides_apps(self):
        """Show Desktop keyboard shortcut must hide applications."""
        self.launch_test_apps()

        # show desktop, verify all windows are hidden:
        self.keybinding("window/show_desktop")
        self.addCleanup(self.keybinding, "window/show_desktop")

        open_wins = self.bamf.get_open_windows()
        for win in open_wins:
            self.assertTrue(win.is_hidden)

 The test is now shorter, and the ``launch_test_apps`` method can be re-used elsewhere. Importantly - even though I've hidden the implementation of the ``launch_test_apps`` method, the test still makes sense.
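
 Based on the code it replaced, ``launch_test_apps`` would look something like this:

 .. code-block:: python

    def launch_test_apps(self):
        """Start the applications used by the show desktop tests."""
        self.start_app('Character Map', locale='C')
        self.start_app('Calculator', locale='C')
        self.start_app('Text Editor', locale='C')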

3. Hide complicated assertions behind custom ``assertXXX`` methods or custom matchers. If you find that you frequently need to use a complicated assertion pattern, it may make sense to either:

 * Write a custom matcher. As long as you follow the protocol laid down by the ``testtools.matchers.Matcher`` class, you can use a hand-written Matcher just like you would use an ordinary one. Matchers should be written in the ``autopilot.matchers`` module if they're likely to be reusable outside of a single test, or as local classes if they're specific to one test.

 * Write custom assertion methods. For example:

  .. code-block:: python

    def test_multi_key_copyright(self):
        """Pressing the sequences 'Multi_key' + 'c' + 'o' must produce '©'."""
        self.dash.reveal_application_lens()
        self.keyboard.press_and_release('Multi_key')
        self.keyboard.type("oc")
        self.assertSearchText("©")

  This test uses a custom method named ``assertSearchText`` that hides the complexity involved in getting the dash search text and comparing it to the given parameter.
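
  A sketch of what ``assertSearchText`` might look like (assuming the dash exposes its search text as ``search_string``, as in the scenarios example later in this document):

  .. code-block:: python

    def assertSearchText(self, text):
        """Assert that the dash search box eventually contains ``text``."""
        self.assertThat(self.dash.search_string, Eventually(Equals(text)))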

Prefer ``wait_for`` and ``Eventually`` to ``sleep``
++++++++++++++++++++++++++++++++++++++++++++++++++++

Early autopilot tests relied on extensive use of the python ``sleep`` call to halt tests long enough for unity to change its state before the test continued. Previously, an autopilot test might have looked like this:

**Bad Example:**

.. code-block:: python

    def test_alt_f4_close_dash(self):
        """Dash must close on alt+F4."""
        self.dash.ensure_visible()
        sleep(2)
        self.keyboard.press_and_release("Alt+F4")
        sleep(2)
        self.assertThat(self.dash.visible, Equals(False))

This test uses two ``sleep`` calls. The first makes sure the dash has had time to open before the test continues, and the second makes sure that the dash has had time to respond to our key presses before we start testing things.

There are several issues with this approach:
 1. On slow machines (like a Jenkins instance running on a virtual machine), we may not be sleeping long enough. This can lead to tests failing on Jenkins that pass on developers' machines.
 2. On fast machines, we may be sleeping too long. This won't cause the test to fail, but it does make running the test suite take longer than it has to.

There are two solutions to this problem:

In Tests
--------

Tests should use the ``Eventually`` matcher. This can be imported as follows:

.. code-block:: python

 from autopilot.matchers import Eventually

The ``Eventually`` matcher works on all attributes in a proxy class that derives from ``UnityIntrospectableObject`` (at the time of writing that is almost all the autopilot unity proxy classes).

The ``Eventually`` matcher takes a single argument, which is another testtools matcher instance. For example, the bad assertion from the example above could be rewritten like so:

.. code-block:: python

 self.assertThat(self.dash.visible, Eventually(Equals(False)))

Since we can use any testtools matcher, we can also write code like this:

.. code-block:: python

 self.assertThat(self.launcher.key_nav_selection, Eventually(GreaterThan(prev_icon)))

Note that you can pass any object that follows the testtools matcher protocol (so you can write your own matchers, if you like).
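
For example, a hand-written matcher that follows the testtools protocol might look like this (the class name and the ``visible`` attribute are illustrative):

.. code-block:: python

    from testtools.matchers import Matcher, Mismatch

    class IsVisible(Matcher):
        """Match any object whose 'visible' attribute is True."""

        def __str__(self):
            return 'IsVisible()'

        def match(self, subject):
            # Return None on success, or a Mismatch describing the failure.
            if subject.visible:
                return None
            return Mismatch('%r is not visible' % (subject,))

Such a matcher can then be used directly, for example ``self.assertThat(self.dash, IsVisible())``.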

.. _wait_for:

In Proxy Classes
----------------

Proxy classes are not test cases, and do not have access to the ``self.assertThat`` method. However, we want proxy class methods to block until unity has had time to process the commands given. For example, the ``ensure_visible`` method on the Dash controller should block until the dash really is visible.

To achieve this goal, all attributes on unity proxy classes have been patched with a ``wait_for`` method that takes a testtools matcher (just like ``Eventually`` - in fact, the ``Eventually`` matcher calls ``wait_for`` under the hood). For example, previously the ``ensure_visible`` method on the Dash controller might have looked like this:

**Bad Example:**

.. code-block:: python

    def ensure_visible(self):
        """Ensures the dash is visible."""
        if not self.visible:
            self.toggle_reveal()
            sleep(2)

In this example we're assuming that two seconds is long enough for the dash to open. To use the ``wait_for`` feature, the code looks like this:

**Good Example:**

.. code-block:: python

    def ensure_visible(self):
        """Ensures the dash is visible."""
        if not self.visible:
            self.toggle_reveal()
            self.visible.wait_for(True)

Note that ``wait_for`` assumes you want to use the ``Equals`` matcher if you don't specify one. Here's another example where we're using it with a testtools matcher:

.. code-block:: python

    key_nav_selection.wait_for(NotEquals(old_selection))


Scenarios
+++++++++

Autopilot uses the ``python-testscenarios`` package to run a test multiple times in different scenarios. A good example of scenarios in use is the launcher keyboard navigation tests: each test is run once with the launcher hide mode set to 'always show launcher', and again with it set to 'autohide launcher'. This allows test authors to write their test once and have it execute in multiple environments.

In order to use test scenarios, the test author must create a list of scenarios and assign them to the test case's ``scenarios`` *class* attribute. The autopilot ibus test case classes use scenarios in a very simple fashion:

**Good Example:**

.. code-block:: python

    class IBusTestsPinyin(IBusTests):
        """Tests for the Pinyin(Chinese) input engine."""

        scenarios = [
            ('basic', {'input': 'abc1', 'result': u'\u963f\u5e03\u4ece'}),
            ('photo', {'input': 'zhaopian ', 'result': u'\u7167\u7247'}),
            ('internet', {'input': 'hulianwang ', 'result': u'\u4e92\u8054\u7f51'}),
            ('disk', {'input': 'cipan ', 'result': u'\u78c1\u76d8'}),
            ('disk_management', {'input': 'cipan guanli ', 'result': u'\u78c1\u76d8\u7ba1\u7406'}),
        ]

        def test_simple_input_dash(self):
            self.dash.ensure_visible()
            self.addCleanup(self.dash.ensure_hidden)
            self.activate_ibus(self.dash.searchbar)
            self.keyboard.type(self.input)
            self.deactivate_ibus(self.dash.searchbar)
            self.assertThat(self.dash.search_string, Eventually(Equals(self.result)))

This is a simplified version of the IBus tests. In this case, the ``test_simple_input_dash`` test will be called 5 times. Each time, the ``self.input`` and ``self.result`` attributes will be set to the values in the scenario list. The first part of the scenario tuple is the scenario name - this is appended to the test id, and can be whatever you want.

.. Important::
   It is important to notice that the test does not change its behavior depending on the scenario it is run under. Exactly the same steps are taken - the only difference in this case is what gets typed on the keyboard, and what result is expected.

Scenarios are applied before the test's ``setUp`` or ``tearDown`` methods are called, so it's safe (and indeed encouraged) to set up the test environment based on these attributes. For example, you may wish to set certain unity options for the duration of the test based on a scenario parameter.

Multiplying Scenarios
---------------------

Scenarios are very helpful, but only represent a single dimension of parameters. For example, consider the launcher keyboard navigation tests. We may want several different scenarios to come into play:

 1. A scenario that controls whether the launcher is set to 'autohide' or 'always visible'.
 2. A scenario that controls which monitor the test is run on (in case we have multiple monitors configured).

We can generate two separate scenario lists to represent these two scenario axes, and then produce the cross product of the two lists like this:

.. code-block:: python

    from autopilot.tests import multiply_scenarios

    class LauncherKeynavTests(AutopilotTestCase):

        hide_mode_scenarios = [
            ('autohide', {'hide_mode': 1}),
            ('neverhide', {'hide_mode': 0}),
        ]

        monitor_scenarios = [
            ('monitor_0', {'launcher_monitor': 0}),
            ('monitor_1', {'launcher_monitor': 1}),
        ]

        scenarios = multiply_scenarios(hide_mode_scenarios, monitor_scenarios)

(please ignore the fact that we're assuming that we always have two monitors!)

In the test class's ``setUp`` method, we can then set the appropriate unity option and make sure we're using the correct launcher:

.. code-block:: python

    def setUp(self):
        super().setUp()
        self.set_unity_option('launcher_hide_mode', self.hide_mode)
        self.launcher_instance = self.launcher.get_launcher_for_monitor(self.launcher_monitor)

This allows us to write tests that work automatically in all the scenarios:

.. code-block:: python

    def test_keynav_initiates(self):
        """Launcher must start keyboard navigation mode."""
        self.launcher.keynav_start()
        self.assertThat(self.launcher.keynav_mode, Eventually(Equals(True)))

This works fine. So far we've not done anything to cause undue pain... until we decide that we want to extend the scenarios with an additional axis:

.. code-block:: python

    from autopilot.tests import multiply_scenarios

    class LauncherKeynavTests(AutopilotTestCase):

        hide_mode_scenarios = [
            ('autohide', {'hide_mode': 1}),
            ('neverhide', {'hide_mode': 0}),
        ]

        monitor_scenarios = [
            ('monitor_0', {'launcher_monitor': 0}),
            ('monitor_1', {'launcher_monitor': 1}),
        ]

        launcher_monitor_scenarios = [
            ('launcher on all monitors', {'monitor_mode': 0}),
            ('launcher on primary monitor only', {'monitor_mode': 1}),
        ]

        scenarios = multiply_scenarios(hide_mode_scenarios, monitor_scenarios, launcher_monitor_scenarios)

Now we have a problem: Some of the generated scenarios won't make any sense. For example, one such scenario will be ``(autohide, monitor_1, launcher on primary monitor only)``. If monitor 0 is the primary monitor, this will leave us running launcher tests on a monitor that doesn't contain a launcher!

There are two ways to get around this problem, and they both lead to terrible tests:

 1. Detect these situations and skip the test. This is bad for several reasons - first, skipped tests should be viewed with the same level of suspicion as commented out code. Test skips should only be used in exceptional circumstances. A test skip in the test results is just as serious as a test failure.

 2. Detect the situation in the test, and run different code using an if statement. For example, we might decide to do this:

 .. code-block:: python

     def test_something(self):
         # ... setup code here ...
         if self.monitor_mode == 1 and self.launcher_monitor == 1:
             pass  # test something else
         else:
             pass  # test the original thing.

 As a general rule, tests shouldn't have assert statements inside an if statement unless there's a very good reason for doing so.

Scenarios can be useful, but we must be careful not to abuse them. It is far better to spend more time typing and end up with clear, readable tests than it is to end up with fewer, less readable tests. Like all code, tests are read far more often than they're written.

.. _object_ordering:

Do Not Depend on Object Ordering
++++++++++++++++++++++++++++++++

Calls such as :meth:`~autopilot.introspection.ProxyBase.select_many` return several objects at once. These objects are explicitly unordered, and test authors must take care not to make assumptions about their order.

**Bad Example:**

.. code-block:: python

    buttons = self.select_many('Button')
    save_button = buttons[0]
    print_button = buttons[1]

This code may work initially, but there's absolutely no guarantee that the order of objects won't change in the future. A better approach is to select the individual components you need:

**Good Example:**

.. code-block:: python

    save_button = self.select_single('Button', objectName='btnSave')
    print_button = self.select_single('Button', objectName='btnPrint')

This code will continue to work in the future.