
[services] Tunnelbroker - Tools tests
ClosedPublic

Authored by max on Apr 7 2022, 9:45 AM.
Tags
None

Details

Summary

Tools is an internal set of utility functions.

We add tests for the following functions (see the sketch below the list):

  • generateRandomString(),
  • validateDeviceID(),
  • validateSessionID().
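
For context, a minimal sketch of what one of these tests might look like with GoogleTest. The header name and the exact signature are assumptions for illustration, not taken from this diff.

  #include <gtest/gtest.h>
  #include <cstddef>
  #include <string>

  #include "Tools.h" // assumed header exposing the helpers under test

  // Assumed signature: std::string generateRandomString(std::size_t size);
  TEST(ToolsTest, GenerateRandomStringReturnsRequestedLength) {
    const std::size_t length = 32;
    const std::string generated = generateRandomString(length);
    EXPECT_EQ(generated.size(), length);
  }
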
Test Plan

Run yarn test-tunnelbroker-service; all tests run and succeed.

Diff Detail

Repository
rCOMM Comm
Branch
tunnelbroker-tests-tools
Lint
No Lint Coverage
Unit
No Test Coverage

Event Timeline

services/tunnelbroker/test/ToolsTest.cpp
19–22

We probably shouldn't use randomly generated data for tests.

tomek requested changes to this revision. Apr 8 2022, 12:34 AM
tomek added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
19–22

Yeah, definitely. The issue with randomly generated data is that repeated test runs might have a different outcome (either success or failure depending on the randomness).

If we want to use random data, we should generate it and hardcode it in the test.
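
As a rough illustration of that suggestion; the deviceID value below is a placeholder rather than real generated data, and the validateDeviceID signature is assumed:

  // The input was generated once (offline) and then frozen here, so
  // repeated runs always exercise exactly the same value.
  TEST(ToolsTest, ValidateDeviceIDReturnsTrueForHardcodedValidID) {
    // Placeholder - in the real test this would be an ID produced once by
    // the generator and pasted in verbatim.
    const std::string hardcodedDeviceID = "placeholder-device-id";
    EXPECT_TRUE(validateDeviceID(hardcodedDeviceID));
  }
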

This revision now requires changes to proceed. Apr 8 2022, 12:34 AM
services/tunnelbroker/test/ToolsTest.cpp
19–22

We probably shouldn't use randomly generated data for tests.

Yeah, definitely. The issue with randomly generated data is that repeated test runs might have a different outcome (either success or failure depending on the randomness).

If we want to use random data, we should generate it and hardcode it in the test.

I don't agree with you guys here. If the function doesn't return consistent results for randomly generated data (generated using the expected pattern), something is wrong with the implementation.
There is no single answer on whether to use random data in tests; it's always a compromise.

The only reason not to use generated data is that the test is no longer repeatable. The upside of using generated data is that we test a wide range of input data.

A "monkey test" can smoke out bugs from the "dark corners" of the implementation.

The folly library (and, I think, FB in general) uses random data in its tests. Please take a look at FBStringTest and ConcurrentSkipListTest, for example.

@ashoat, can you share your thoughts about FB practice here too?

services/tunnelbroker/test/ToolsTest.cpp
19–22

So let's suppose your random test failed. Then you changed something and the test succeeded. How would you know if you actually fixed the issue or were just lucky? How many times would you plan to run the test to make sure that it passes?

Agreed, random tests are sometimes useful, but letting them find corner cases instead of creating explicit test cases for each one is a lazy approach.

At least random tests should be an addition to a solid set of tests exploring all the corner cases.

I'm not blanket opposed to tests that generate random data at runtime, but I do think it's always better (if you have time) to break it down into multiple tests that use a representative set of random data that was generated when the test was written. Overall I don't think this matters a whole lot, though – I worry about us spending too much time on tests as opposed to feature / product work.
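
One way to express that compromise is a value-parameterized test over a small set of inputs that were generated once and then frozen. The session IDs below are placeholders and the validateSessionID signature is an assumption:

  #include <gtest/gtest.h>
  #include <string>

  // Assumed signature: bool validateSessionID(std::string sessionID);
  class ValidateSessionIDTest : public testing::TestWithParam<std::string> {};

  TEST_P(ValidateSessionIDTest, ReturnsTrueForValidSessionID) {
    EXPECT_TRUE(validateSessionID(GetParam()));
  }

  // Representative IDs generated once when the test was written, then
  // hardcoded so every run is deterministic.
  INSTANTIATE_TEST_SUITE_P(
      RepresentativeSessionIDs,
      ValidateSessionIDTest,
      testing::Values(
          "session-id-placeholder-1",
          "session-id-placeholder-2",
          "session-id-placeholder-3"));
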

karol added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
19–22

I think the main purpose of these tests is to assure that new changes don't break anything. With tests using random data, you can never be sure about this.

You can always generate random data and hardcode it in the tests. If you hardcode multiple data sets, you'll also check the consistency of the tests' results.

Another story would be to introduce something like a fuzzer, but I don't think we need it that much.

Prints generated test values. The validation tests were split into two.

karol added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
15–17 ↗(On Diff #11337)

Why do we need these logs?

19–22

I don't think this is solved. Please solve before requesting a review.

This revision now requires changes to proceed. Apr 12 2022, 1:52 AM

Why is there a dot at the end of the diff title?

services/tunnelbroker/test/ToolsTest.cpp
19–22

I think the main purpose of these tests is to assure that new changes don't break anything. With tests using random data, you can never be sure about this.

You can always generate random data and hardcode it in the tests. If you hardcode multiple data sets, you'll also check the consistency of the tests' results.

Another story would be to introduce something like a fuzzer, but I don't think we need it that much.

I've added printing of the generated value when a test fails. Now a failed test can be reproduced.
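
A sketch of how that can look with GoogleTest's assertion streaming, so a failing run prints the exact value that triggered it (generateDeviceID is an assumed helper name, not taken from this diff):

  TEST(ToolsTest, ValidateDeviceIDReturnsTrueForGeneratedID) {
    // Assumed helper that builds an ID matching the expected pattern.
    const std::string deviceID = generateDeviceID();
    // On failure, GoogleTest appends the streamed message, so the exact
    // generated value that broke the test ends up in the output.
    EXPECT_TRUE(validateDeviceID(deviceID))
        << "failed for generated deviceID: " << deviceID;
  }
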

The main point of tests, in addition to what has already been mentioned, is to find code defects before they reach the user. Of course, we cannot predict all the cases, but we can test a wide range of input values at a low cost using generated values.

With the addition of printing the generated value, these tests are now:

  • reproducible,
  • pass the "monkey test",
  • have a wide range of unpredictable values to test.

If we don't have generated values, we can run into the so-called "pesticide paradox":

  • Repeating the same test cases again and again will not find new bugs.

I'm flexible with this and we can use the "mix" of generated and static values in the tests, as we agreed yesterday at the meeting. It's a good point to print the generated values if we have randomly generated values in the tests. This is a good catch!

Also, we can discuss this tomorrow during Office hours if we want to make some rules for it.

max retitled this revision from [services] Tunnelbroker - Tools tests. to [services] Tunnelbroker - Tools tests. Apr 12 2022, 2:08 AM
In D3654#101911, @karol-bisztyga wrote:

Why is there a dot at the end of the diff title?

Removed.

max marked 2 inline comments as done.
max added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
15–17 ↗(On Diff #11337)

Why do we need these logs?

To make the test reproducible.

19–22

I don't think this is solved. Please solve before requesting a review.

Can you describe why it's not solved? Please check my previous comment.

karol added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
19–22

Can you describe why it's not solved? Please check my previous comment.

I added my comment at 10:52 and you added yours at 11:08, so I couldn't see it. Maybe you forgot to hit the "Submit" button - IDK.

You never responded to this:

I think the main purpose of these tests is to assure that new changes don't break anything. With tests using random data, you can never be sure about this.

At least random tests should be an addition to a solid set of tests exploring all the corner cases.

We can do a mix, static + generated data as you said:

I'm flexible with this and we can use the "mix" of generated and static values in the tests, as we agreed yesterday at the meeting. It's a good point to print the generated values if we have randomly generated values in the tests. This is a good catch!

But I cannot see any static data here (or maybe I missed it?).

In all other services we use static data; we should have consistent tests across all services.

This revision now requires changes to proceed. Apr 12 2022, 2:26 AM
max marked 3 inline comments as done.
max added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
19–22

Can you describe why it's not solved? Please check my previous comment.

I added my comment at 10:52 and you added yours at 11:08, so I couldn't see it. Maybe you forgot to hit the "Submit" button - IDK.

I think we have some race conditions here )) I am writing while you are responding )

You never responded to this:

I think the main purpose of these tests is to assure that new changes don't break anything. With tests using random data, you can never be sure about this.

Of course I agree with the first point. But if you skip the random tests, keep in mind that random data can still break things - and if it does, something is wrong with the code. You should consider validating that data, etc., to prevent this (this is what's called a "monkey test").

We generate data according to the expected pattern. That's why, if data generated with the expected pattern breaks things, something is wrong with the implementation.

I am convinced that the implementation needs to be tested with a wider range of values if that is "cheap" in terms of implementation effort and ongoing cost.

At least random tests should be an addition to a solid set of tests exploring all the corner cases.

We can do a mix, static + generated data as you said:

I'm flexible with this and we can use the "mix" of generated and static values in the tests, as we agreed yesterday at the meeting. It's a good point to print the generated values if we have randomly generated values in the tests. This is a good catch!

But I cannot see any static data here (or maybe I missed it?).

It doesn't make sense to use static data alongside randomly generated data in one test, because if data generated with the expected pattern breaks things, something is wrong here. In real-world usage, no static data will be passed to it.

In all other services we use static data; we should have consistent tests across all services.

I asked a question at our sync yesterday - do we need some rules to follow for this? I thought the conclusion was: no, no specific rules. So we can use a mix if the test does not take too long to implement.
I don't think using an already existing generator instead of just hardcoding values costs anything, but we can avoid creating generators exclusively for tests where they don't already exist.

We should consider using better approaches where we can and stay flexible, or adopt some rules for the team.

The main concern about using generated data was that tests were not reproducible. Now they are, because we will know the values if a test fails. In addition, we get testing over a wider range of values at no additional cost.

Using random, pattern-generated data is a widespread practice in many big projects. I've mentioned folly before as well.

tomek requested changes to this revision. Apr 12 2022, 9:29 AM

It looks like there's a strong disagreement here. We can easily satisfy both approaches by adding some tests with static test data - hardcoded strings. Would you guys @geekbrother @karol-bisztyga agree with that?

services/tunnelbroker/test/ToolsTest.cpp
15–17 ↗(On Diff #11337)

It looks like they provide operator<< for us - nice find!

This revision now requires changes to proceed. Apr 12 2022, 9:29 AM
max marked 2 inline comments as done.

Added tests with static variables along with generated ones. Switched from TEST_F to TEST, because we don't use fixtures here.
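
A rough sketch of what that split might look like (plain TEST, no fixture); the static value is a placeholder and generateSessionID is an assumed helper name:

  // Static, pre-generated input: fully deterministic across runs.
  TEST(ToolsTest, ValidateSessionIDReturnsTrueForStaticValidSessionID) {
    const std::string staticSessionID = "static-session-id-placeholder";
    EXPECT_TRUE(validateSessionID(staticSessionID));
  }

  // Generated input: wider coverage, with the value printed on failure so
  // the run stays reproducible.
  TEST(ToolsTest, ValidateSessionIDReturnsTrueForGeneratedSessionID) {
    const std::string generatedSessionID = generateSessionID(); // assumed helper
    EXPECT_TRUE(validateSessionID(generatedSessionID))
        << "failed for generated sessionID: " << generatedSessionID;
  }
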

In D3654#102087, @palys-swm wrote:

It looks like there's a strong disagreement here. We can easily satisfy both approaches by adding some tests with static test data - hardcoded strings. Would you guys @geekbrother @karol-bisztyga agree with that?

Ok, let's be flexible on it!
I've added tests with the static variables along with the generated ones.

services/tunnelbroker/test/ToolsTest.cpp
15–17 ↗(On Diff #11337)

It looks like they provide operator<< for us - nice find!

Thanks!

tomek requested changes to this revision. Apr 19 2022, 2:26 AM

The naming scheme in this diff is really strange. It made some sense in previous diffs, but here I can't find any logic to it. As an example of an improved name, you can replace TestOperationsOnValidateSessionIDInvalid with ValidateSessionShouldReturnFalseForInvalidSession. We can also consider creating a separate test class ValidateSessionIDTests that has a test ShouldReturnFalseForInvalidSession.
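
For illustration, the two options mentioned above would look like this in GoogleTest (test bodies omitted):

  // Option 1: keep a single ToolsTest suite but use a descriptive name.
  TEST(ToolsTest, ValidateSessionShouldReturnFalseForInvalidSession) { /* ... */ }

  // Option 2: a dedicated suite per function under test.
  TEST(ValidateSessionIDTests, ShouldReturnFalseForInvalidSession) { /* ... */ }
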

This revision now requires changes to proceed. Apr 19 2022, 2:26 AM

Some high-level feedback for the team here: we are spending a lot of time on tests. It's hard to justify this from a product priority perspective. I think we need to find a way to spend less time writing tests, or alternately to deprioritize automated testing entirely if we aren't able to more immediately reduce the cost of writing tests.

In D3654#104086, @palys-swm wrote:

The naming scheme in this diff is really strange. It made some sense in previous diffs, but here I can't find any logic to it. As an example of an improved name, you can replace TestOperationsOnValidateSessionIDInvalid with ValidateSessionShouldReturnFalseForInvalidSession. We can also consider creating a separate test class ValidateSessionIDTests that has a test ShouldReturnFalseForInvalidSession.

Changed test names.

Some high-level feedback for the team here: we are spending a lot of time on tests. It's hard to justify this from a product priority perspective. I think we need to find a way to spend less time writing tests, or alternately to deprioritize automated testing entirely if we aren't able to more immediately reduce the cost of writing tests.

Agree with this; we spent too much time on it.

tomek added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
20 ↗(On Diff #11698)

Please fix this in all the names

This revision is now accepted and ready to land. Apr 21 2022, 7:07 AM

Fixing names *Return* -> *Returns*

max added inline comments.
services/tunnelbroker/test/ToolsTest.cpp
20 ↗(On Diff #11698)

Please fix this in all the names

Thanks. Fixed.

This revision was automatically updated to reflect the committed changes.
max marked an inline comment as done.