Technical Debts - pc2ccs/pc2v9 GitHub Wiki

This article describes numerous "technical debts" known to exist in the PC2v9 system: design to-dos, refactoring work, and technical-deficit tasks. Most items described here require a thorough requirements discussion and analysis.

Refactor Executable

  • Re-write the Executable class. This 10+ year-old critical part of the system is in desperate need of a re-write.

  • Execution time measurement needs to be accurate, at a minimum:

    • even if this requires executing an external program, such as the sandbox, to precisely measure the execution time.
    • the TLE condition should halt the program at exactly the time limit.
  • The test case output needs to be returned via an accessor so that all validator results are available.

  • When Executable was created 11 years ago, there was a single data set and no validator.

  • Remove methods that use IFileViewer; results are now viewed with TestResultsFrame (formerly known as MultiTestSetOutputViewerFrame).

  • Change the boolean execute() to return an enum and fix the process flow/logic.

  • In executeAndValidateDataSet(dataset) the code is wrong and produces unexpected output that has required kludges to work around. A substantial rewrite is needed to return an enum and, more importantly, to better handle execution when there is no validation. This line in particular needs to be replaced:

    if (executeProgram(dataSetNumber) && isValidated()) {...

  • An enum should be returned that summarizes the run's test results. This enum should be added to the ExecutionData class. The enum should include the following values/conditions:

    • source compilation error (syntax error, etc.)
    • missing compiler (javac.exe specified, but no such file/can't execute)
    • compiler execute failure, cannot run compiler or compiler command line error
    • missing executable (e.g. no ISumit.class file exists)
    • team executable failure, cannot run team's solution
    • RTE
    • TLE
    • SV
    • no validator defined - the problem has no validator defined; the program was executed and potentially produced output, but the validator could not run (validator program not found, syntax error on the validator command line, etc.)

Note that for judgements, other fields will hold the judgement acronym and judgement text; these enum values are summaries of the run:

  • Yes/Accepted - the validator returns a Yes/passed judgement

  • No/Failure - the validator returns No/failed validation.

  • Undetermined - validator did not return enough information to judge as Yes or No (pc2 and external validator only)
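
A minimal sketch of such a summary enum; the value names below are invented here, and only the conditions come from the lists above:

    // Value names are hypothetical; the conditions mirror the lists above.
    public enum RunResultSummary {
        COMPILATION_ERROR,        // source compilation error (syntax error, etc.)
        MISSING_COMPILER,         // e.g. javac specified but no such file / can't execute
        COMPILER_EXECUTE_FAILURE, // cannot run compiler, or compiler command line error
        MISSING_EXECUTABLE,       // e.g. no ISumit.class file exists
        TEAM_EXECUTABLE_FAILURE,  // cannot run the team's solution
        RUN_TIME_ERROR,           // RTE
        TIME_LIMIT_EXCEEDED,      // TLE
        SECURITY_VIOLATION,       // SV
        NO_VALIDATOR_DEFINED,     // validator missing or could not be run
        ACCEPTED,                 // validator returned Yes/passed
        FAILED,                   // validator returned No/failed validation
        UNDETERMINED              // validator output insufficient to judge Yes or No
    }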

  • The execution/validation results should be returned for each test case. Currently there is a single ExecutionData for all test cases, and it is overwritten with each test case's execute/validate info, leaving only the last. Provide results for each test case that is run.

  • Keep getExecutionData(), which returns the last ExecutionData, then add a method List<ExecutionData> getExecutionDatas() or List<ExecutionData> getTestResultsExecutionData(), as sketched below.
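
A minimal sketch of the accessor change, assuming the existing ExecutionData class; the field and recordTestCaseResult() below are invented for illustration:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class ExecutableSketch {
        private ExecutionData executionData;                      // last test case
        private List<ExecutionData> testCaseResults = new ArrayList<>();

        public ExecutionData getExecutionData() {
            return executionData;                                 // existing behavior
        }

        public List<ExecutionData> getExecutionDatas() {
            return Collections.unmodifiableList(testCaseResults); // one entry per test case
        }

        // Called once per test case instead of overwriting a single field.
        void recordTestCaseResult(ExecutionData data) {
            executionData = data;
            testCaseResults.add(data);
        }
    }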

SerializedFile masks Exception

SerializedFile doesn't properly throw exceptions; it masks them. Consider a general strategy for reporting such errors to the user.
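
One possible strategy, sketched below with hypothetical names (not the actual SerializedFile API): record the exception instead of dropping it, and let callers query or rethrow it.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SerializedFileSketch {
        private byte[] buffer;
        private Exception loadException;   // recorded instead of silently dropped

        public SerializedFileSketch(String fileName) {
            try {
                buffer = Files.readAllBytes(Paths.get(fileName));
            } catch (Exception e) {
                loadException = e;         // today this information is masked
            }
        }

        // Lets callers surface the failure to the user or the log.
        public void checkForException() throws Exception {
            if (loadException != null) {
                throw loadException;
            }
        }
    }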

Logging

  • Change so that all output to System.err is done before the output to the log; see the example below.
  • Consider removing most of the System.err output and adding an appender to the log instead.
  • Improve log viewer
    • no more MCLB
    • add ability to show log in an editor like gvim
    • add a "copy content to clipboard" function

Example where the log call should come after the output to System.err. This should be done so that the output to System.err is written unconditionally, even if an NPE happens on the getLog() line:

        controller.getLog().log(Log.WARNING, s, exception);
        System.err.println(Thread.currentThread().getName() + " " + s);
        System.err.flush();
        exception.printStackTrace(System.err);
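
A possible reordering using the same calls, so the System.err output cannot be skipped by a failure in getLog():

        System.err.println(Thread.currentThread().getName() + " " + s);
        System.err.flush();
        exception.printStackTrace(System.err);
        // Last, so a failure here (e.g. an NPE from getLog()) cannot
        // suppress the System.err output above.
        controller.getLog().log(Log.WARNING, s, exception);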

Server site 0 and FauxSite

Both of these items were kludges to handle specific conditions, and we never got back to them.

Server site 0 was used as a destination meaning "all servers and all clients".

Faux Site... was used as a value because, as I remember it, the actual server number was not available.

Site Recovery

  • Can site 2 really be started in stand-alone mode?

Multi Site reconnection

  • John found bugs while testing reconnection (fixed)
  • Add feature to automatically reconnect

Config Recovery

  • We save multiple copies of runs and settings, but if any of those copies are corrupted there is no automatic recovery.

Quick Load

https://pc2.ecs.csus.edu/bugzilla/show_bug.cgi?id=439

Improve Settings

  • Re-design the Settings tab. Use a name/value table where the value can be any component (checkbox, pull-down, etc.); see the sketch after this list.
  • Create a new UI class for that table.
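
A minimal sketch of such a table, assuming Swing; the class name and example settings are invented here:

    import java.awt.Component;
    import javax.swing.JCheckBox;
    import javax.swing.JComboBox;
    import javax.swing.JTable;
    import javax.swing.table.DefaultTableModel;
    import javax.swing.table.TableCellRenderer;

    public class SettingsTableSketch {
        public static JTable createTable() {
            Object[][] rows = {
                { "Auto-judging enabled", new JCheckBox() },
                { "Team display mode", new JComboBox<>(new String[] { "Login", "Display name" }) },
            };
            DefaultTableModel model = new DefaultTableModel(rows, new Object[] { "Setting", "Value" }) {
                public boolean isCellEditable(int row, int col) {
                    return col == 1;       // only the Value column is editable
                }
            };
            JTable table = new JTable(model);
            // Render whatever component is stored in the Value column.
            table.getColumnModel().getColumn(1).setCellRenderer(new TableCellRenderer() {
                public Component getTableCellRendererComponent(JTable t, Object value,
                        boolean selected, boolean focus, int row, int col) {
                    return (Component) value;
                }
            });
            return table;
        }
    }

Editing the components would also require a matching TableCellEditor; the renderer above only displays them.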

Setup button bar

https://pc2.ecs.csus.edu/bugzilla/show_bug.cgi?id=1214

Replay

https://pc2.ecs.csus.edu/bugzilla/show_bug.cgi?id=673

Only one substituteAllStrings

There are two substituteAllStrings methods; everything should use only ExecuteUtilities.substituteAllStrings. The two copies live in:

  • ExecuteUtilities
  • Executable

A refactor related to the above was left incomplete. The goal was to extract substituteAllStrings from Executable for unit-testing purposes, and so that other code (such as the UI) could show an example of what a substitution would produce without requiring an Executable instance.
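
The intended end state, roughly; the signature below is a guess, not the actual method:

    // Executable keeps no copy of the logic and simply delegates, so other
    // callers (such as the UI) can use ExecuteUtilities directly.
    public String substituteAllStrings(String original) {
        return ExecuteUtilities.substituteAllStrings(original);
    }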

Multiple ini implementations

Currently there exist two classes, IniFile and Ini, which perform (almost) identical functions. They should be merged.

Must be connected to configure other sites/servers

Currently it is not possible to configure other sites if they are not connected. It should be possible to specify the configuration for a (currently) not-connected site. In particular, the following need to be configurable for ALL (not just "connected") sites:

  • Site Contest Time
  • Accounts

That is, it should be possible to specify the configuration for a site and then, when the site connects, the configuration settings are applied.

Judgement Loading

Add support for judgement acronyms, including in YAML; see bug [https://pc2.ecs.csus.edu/bugzilla/show_bug.cgi?id=1222] for more.

Better details about judging

For each test case there should be an accurate judgement with details - in particular for the RTE, TLE, and "halted by operator" (pre-validator) conditions.

There should also be a "details comment" that provides more information about the judgement. The Pass/Fail column should be changed to show the actual judgement acronym and rendered as a hyperlink. The hyperlink should show the details from the judgement, i.e. the reason the judgement was rendered, as well as any details/text that the validator outputs.

Auto testing of CDP output validator

Loop through all of the judge's submissions, validate them, and print whether each test case passes.
See the separate EWU "CDP Tester" project for a start on this; a rough sketch follows.
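
A rough, self-contained sketch of such a harness; the class is invented here, and compiling, running, and validating are elided. The directory layout follows the ICPC problem package format (submissions/accepted, data/secret):

    import java.io.File;

    public class CdpTesterSketch {
        public static void main(String[] args) {
            File cdpRoot = new File(args[0]);
            for (File problem : cdpRoot.listFiles(File::isDirectory)) {
                File accepted = new File(problem, "submissions/accepted");
                File secret = new File(problem, "data/secret");
                if (!accepted.isDirectory() || !secret.isDirectory()) {
                    continue;
                }
                for (File solution : accepted.listFiles()) {
                    for (File input : secret.listFiles(f -> f.getName().endsWith(".in"))) {
                        // run 'solution' on 'input', validate against the
                        // matching .ans file, then print pass/fail here
                        System.out.println(problem.getName() + " "
                                + solution.getName() + " " + input.getName());
                    }
                }
            }
        }
    }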

Add fat client auto reconnection

Add an option to skip the Roman-numeral countdown and automatically reconnect the client instead.

Missing password file generation and merging

There should be a capability for automatic password file generation and merging.

Map Judgements

Provide a way to map judgements from pc2 to a validator.

Maybe each validator could have a --list option to output its list of possible judgements. This list would be the input for assigning each validator judgement to a pc2 judgement; a sketch follows.
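
For example, the mapping could be seeded by running the hypothetical --list option and collecting each reported judgement name for later assignment to a pc2 judgement:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ValidatorJudgementLister {
        // Runs "<validator> --list" (a hypothetical option) and returns each
        // reported judgement name, mapped to a pc2 judgement assigned later
        // (e.g. in the UI).
        public static Map<String, String> listJudgements(String validatorPath) throws Exception {
            Map<String, String> mapping = new LinkedHashMap<>();
            Process p = new ProcessBuilder(validatorPath, "--list").start();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    mapping.put(line.trim(), null);
                }
            }
            p.waitFor();
            return mapping;
        }
    }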

Support validator name mapping in yaml too.

Refactor generateOutput

This method exists in two places; it should be in one place.

After end of contest delete runs

PC2 automatically marks runs received by the server after the contest is over (remaining time is 0:00) as "Deleted". In part this was done to avoid counting those runs in the standings.

Problem: these runs can remain un-judged, yet the contest can still be finalized.

We should change from the Deleted status to a new status, so that when finalizing, the user can be prompted about the un-judged/unconsidered runs.

Question/test: can a deleted run be manually judged?

Jetty auth via pc2 accounts

Web service auth is done from the entries in realm.properties. Change it to use pc2 accounts and permissions.

Sandbox execution time includes sandbox startup

The team's program execute time should not include the sandbox startup/cleanup time.

In pc2 the execute time is measured from the start of the program's run to the end of the program's run.

With a sandbox, the program being timed is the sandbox program itself, so the execute time does not reflect the team's program time; it reflects that time plus all of the overhead for the sandbox to start and shut down.
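
What pc2 measures today is, roughly, wall-clock time around the whole child process; a simplified sketch (names invented here):

    import java.io.IOException;

    public class ExecuteTimeSketch {
        // The timed interval covers the entire child process, so when
        // 'command' launches a sandbox, the sandbox's startup/cleanup
        // overhead is charged to the team's program.
        static long timeRunMs(String... command) throws IOException, InterruptedException {
            long start = System.currentTimeMillis();
            Process process = new ProcessBuilder(command).inheritIO().start();
            process.waitFor();
            return System.currentTimeMillis() - start;
        }
    }

One fix is for the sandbox itself to measure and report the team program's time (for example, on exit), with pc2 reading that value instead of using its own wall-clock measurement.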
