Globalyzer Server and Rule Sets

How can Globalyzer be leveraged across my development team?

Software internationalization involves a surprisingly complex variety of issues, depending upon your requirements, technologies and unique application needs. Globalyzer’s client/server architecture lets you build and store internationalization detection and filtering rules (which we call Rule Sets), and which can then be shared within your development team. Developers can then perform internationalization tasks efficiently, whether they are working in the next cube or in the next country.

If we customize one rule set for some specific files, how easy is it to apply the same rule set for other projects?

Multiple scans in multiple projects can use the same rule set.

What are the server requirements for hosting a Globalyzer Server on premises?

Globalyzer Server can be hosted by Lingoport, installed on premises, or run on a cloud-based server. The server requirements change as technology changes, so it is best to work with Lingoport to determine the right server size for your use.

For an on-premises installation, the standard IT setup for a virtual server is:

  • 4 GB memory,
  • 160 GB disk space, and
  • 2 CPUs.

If you anticipate heavy use, you may need up to:

  • 8 GB memory,
  • 500 GB disk space, and
  • 4 CPUs.

Rule Sets

Can I modify the Default Rule Sets?

If you are hosting the Globalyzer Server, the Administrator user can modify any of the Default Rule Sets.

If Lingoport is hosting the Globalyzer Server, you will not be able to modify the Default Rule Sets, but you can modify one of your own Rule Sets and then allow your team members to either use it (sharing) or copy it. This Rule Set can then be the starting point for their internationalization scanning and filtering process.

In what order are Rule Set detection and filtering applied?

There are four categories of results:

  1. Embedded Strings
  2. Locale-Sensitive Methods
  3. General Patterns
  4. Static File References


[[file:Ruleset_rules.png | 500px]]


For Embedded Strings,

  • All strings are found as issues initially, then
  • The string filters are run (all except String Method Filter) to see what should be filtered
    • String Line Filter
    • String Literal Filter
    • String Operand Filter
  • The detection patterns are run to see what should be retained
    • String Method Pattern
    • String Operand Pattern
    • String Retention Pattern
  • The String Method Filter is run last. We used to run String Method Filters along with the other filters, but found in practice that String Method Filters should trump all detections, so they are run last.

For the other three categories, the patterns are run first to find the issues, and then the filters are run to remove them.
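
As a rough illustration only (not Globalyzer's actual implementation; the class name and pattern contents below are invented), this Java sketch models the Embedded Strings ordering described above: detect everything, run the string filters, run the detection patterns, then run the String Method Filter last.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of the Embedded Strings ordering described above.
// The pattern contents are made up for the example; only the ordering
// mirrors the documentation.
public class EmbeddedStringOrder {
    public static void main(String[] args) {
        // 1. All string literals found by the scan are issues initially.
        List<String> detected = new ArrayList<>(List.of(
                "Hello, world!", "DEBUG", "log.info(\"starting up\")", "ERROR_42"));

        // 2. String Line / String Literal / String Operand filters remove
        //    strings that look like non-issues (here: ALL_CAPS constants).
        Pattern literalFilter = Pattern.compile("^[A-Z0-9_]+$");
        List<String> issues = new ArrayList<>();
        for (String s : detected) {
            if (!literalFilter.matcher(s).matches()) {
                issues.add(s);
            }
        }

        // 3. Detection patterns (String Method / Operand / Retention Patterns)
        //    add back strings that should be retained despite the filters.
        Pattern retentionPattern = Pattern.compile("ERROR");
        for (String s : detected) {
            if (!issues.contains(s) && retentionPattern.matcher(s).find()) {
                issues.add(s);
            }
        }

        // 4. The String Method Filter runs last and trumps all detections.
        Pattern stringMethodFilter = Pattern.compile("log\\.info");
        issues.removeIf(s -> stringMethodFilter.matcher(s).find());

        System.out.println(issues);  // [Hello, world!, ERROR_42]
    }
}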

I changed the Rule Set but the Workbench keeps using the old Rule Set. How can I use the updated Rule Set?

When rule sets are modified on the server, select Project->Reload Rule Sets to refresh the client.

We have lots of Rule Sets: Is there a way to organize Rule Sets to better manage them?

Globalyzer supports Inherited Rule Sets.

Base rule sets can be created and maintained by an individual, and project-level rule sets can then extend the base rule set. The extended rule set has everything from the base rule set, plus whatever is added or modified.

When an individual introduces a new rule or modifies a rule in a project-level rule set, other projects aren't affected.

Where can I find help on “General Pattern” issues found in C++ code scanning?

If you log in to the Globalyzer Server and look at the General Patterns for your rule set, they often include information on why Globalyzer scans for that pattern. In addition, the Help system on the Globalyzer Server has various topics on C++ internationalization. In particular, click Reference->Locale-Sensitive Methods->C++ Programming Language->C++ Rule Sets. This help page discusses Unicode support in the various C++ rule sets. For example, a C++ program is usually compiled with single-byte character strings. Such strings cannot hold Unicode characters, which may require more than one byte. That is the main reason our C++ General Patterns scan for character strings: you will have to modify them if they are to hold Unicode strings.

Does Globalyzer fix JavaScript locale-sensitive method issues?

Globalyzer detects methods that could be an issue when supporting multiple languages, but has no specific fixing built in. This is because it’s not always clear that the method is an actual issue and the fix may involve some reworking that requires manual decisions. However, for some programming languages, we have written internationalization (i18n) help for the method that explains the reason for the detection as well as suggestions on what change might need to be made. When we don’t provide specific i18n help, we provide links to external help on the method, which sometimes provide information about i18n considerations.

What is the fix for the JavaScript locale-sensitive method charAt()?

In this case, Globalyzer detected charAt because it is a method that indexes into a string. If that string is later translated, the character at that index may have moved or may not be the same character at all. The fix really depends on the usage: if the string is locale-independent, you can insert an Ignore This Line comment so that Globalyzer no longer flags this issue.
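
As a rough illustration (shown in Java rather than JavaScript, with made-up strings), indexing into a user-visible string stops being reliable once that string is translated:

// Minimal illustration of why indexing into a user-visible string is fragile
// once that string is translated. The strings here are invented for the example.
public class CharAtExample {
    public static void main(String[] args) {
        String english = "File not found";
        String german  = "Datei nicht gefunden";   // a translation of the same message

        // charAt(5) happens to be the 'n' of "not" in the English string,
        // but the same index lands on a different character after translation.
        System.out.println("English charAt(5): '" + english.charAt(5) + "'"); // 'n'
        System.out.println("German  charAt(5): '" + german.charAt(5) + "'");  // ' '
    }
}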

When will internationalization help be added for JavaScript locale-sensitive methods?

We are always pressed to get more features into Globalyzer, but we do try to spend as much time as possible adding to the help. In the meantime, if you have any specific questions, email support@lingoport.com and we’ll get an answer for you right away!

How do I add new JavaScript locale-sensitive methods or modify the description and help for existing methods?

If you have a Globalyzer Team Server license, you can add to or modify the default Locale-Sensitive Methods for each programming language so that your users will also see your changes whenever they create a new Rule Set. If you’re using our hosted globalyzer.com server, you can add to or modify the Locale-Sensitive Methods of a specific Rule Set that you create and then share with other Globalyzer users that are part of your team. That way, your team members will benefit from the work you have done in determining the resolution for Locale-Sensitive Method issues. This approach applies to all Rule Set rules, such as General Patterns, Static File References, and Embedded Strings.

When you create your rule set, can you specify the file extensions you would like scanned?

The default for a Java rule set is to scan files with the following extensions: java, jsp, jspf, and jspx. If you are only interested in jsp files, you can disable the others. Steps to do this:

1) Log in to the server and select your Java rule set

2) Select Configure Source File Extensions

3) Uncheck the file extensions you are not interested in


You can also configure the scan to look at only certain directories in your project. Steps to do this:

1) Log into the client.

2) Select your project in Project Explorer and select Scan->Manage Scans

3) Select the Java scan and select Modify.

4) Select the specific directories to scan (rather than the entire project), and click Finish.


You can run your scan by selecting Scan->Single Scan.

How do I create a rule set for the 'C' language?

For the C language, you should choose one of our C++ variants. The main ones are ANSI UTF-8, ANSI UTF-16, Cross Platform UTF-8, Cross Platform UTF-16, Windows Generic, Windows MBCS, and Windows Unicode. If you are using GNU C, you will want one of the ANSI rule sets: UTF-8 if that’s how you want to support Unicode, or UTF-16 if you will be using wide-character calls to support UTF-16 Unicode. If you are just running on Windows, you can choose a Windows variant; if you’ll be running on both, you’ll need a cross-platform rule set. The difference between the variants is the list of locale-sensitive methods Globalyzer will scan for in your code. To get a better feel, you can create a few rule sets with the different variants and look at the locale-sensitive methods defined.

How to define a regex pattern in a rule set

When specifying regex patterns that contain non-ASCII (Unicode) characters, you need to specify each character as \uXXXX, where XXXX is the hexadecimal code point of the character.

For example, to find the string "中国", you would specify this general pattern: \u4E2D\u56FD

See http://www.regular-expressions.info/unicode.html for more information on Unicode in regular expressions.
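
As an illustration, you can sanity-check an escaped pattern with Java's regex engine (Java is used here purely for the example; the class name and sample text are invented):

import java.util.regex.Pattern;

// Sketch: verify that an escaped general pattern matches the intended characters.
public class EscapedPatternCheck {
    public static void main(String[] args) {
        // Regex escape form of the characters 中国
        Pattern p = Pattern.compile("\\u4E2D\\u56FD");

        String text = "Made in \u4E2D\u56FD";       // "Made in 中国"
        System.out.println(p.matcher(text).find()); // true
    }
}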

How do I create a regex pattern for Registered, Trademark, Copyright symbols?

To detect/filter characters such as ® (Registered), please use the Unicode code point in the regex. For instance,

  • ® (Registered): \u00AE
  • ™ (Trademark): \u2122
  • © (Copyright): \u00A9
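
For example, a single character class can cover all three symbols at once. The short Java sketch below is only an illustration (class name and sample text invented); the character class itself is what you would use in the regex:

import java.util.regex.Pattern;

// A single character class covering Registered, Trademark, and Copyright.
public class SymbolPattern {
    public static void main(String[] args) {
        Pattern symbols = Pattern.compile("[\\u00AE\\u2122\\u00A9]");
        System.out.println(symbols.matcher("Acme\u00AE Widget\u2122").find()); // true
    }
}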

Searching for characters in a specified language

It may be useful to detect all strings that contain characters from a specific language, for instance all strings of Chinese characters within an application. This can be done using Unicode scripts. Here are a few examples:

Chinese: \p{script=Han}

Korean: \p{script=Hangul}


Some languages, such as Japanese, combine multiple scripts.

Japanese: [\p{script=Hiragana}\p{script=Katakana}\p{script=Han}]
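
For illustration, these script properties can be tried out directly in Java 7 or later (the script names are standard Unicode script names; the class name and sample strings are invented):

import java.util.regex.Pattern;

// Sketch: matching Chinese and Japanese strings with Unicode script properties.
// Requires Java 7+ (earlier versions, such as Java 1.6, lack script support).
public class ScriptMatch {
    public static void main(String[] args) {
        Pattern chinese  = Pattern.compile("\\p{script=Han}+");
        Pattern japanese = Pattern.compile(
                "[\\p{script=Hiragana}\\p{script=Katakana}\\p{script=Han}]+");

        System.out.println(chinese.matcher("\u4E2D\u56FD").matches());                    // true (中国)
        System.out.println(japanese.matcher("\u65E5\u672C\u8A9E\u3067\u3059").matches()); // true (日本語です)
    }
}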

Searching for characters in a specified language with character ranges

Unicode scripts are not the only way to find characters from a specified language. While they are the simplest means to do this, they are not supported in all environments; for instance, Java 1.6 does not support regex searches using Unicode scripts.

Another solution is to use Unicode character ranges. For instance, CJK Unified Ideographs represent the most common Chinese characters. The basic set of CJK Unified Ideographs is contained within the character range \u4e00-\u9fd5. To search for strings containing these characters, create a String Retention Pattern with the following regex pattern:

[\u4e00-\u9fd5]+

Additional Chinese Ideograph characters fall within the ranges of:

  • \u3400-\u4db5 (CJK Unified Ideographs Extension A)
  • \u20000-\u2a6d6 (CJK Unified Ideographs Extension B)
  • \u2a700-\u2b734 (CJK Unified Ideographs Extension C)

Multiple character ranges can be used to create a single expanded character set, like so:

[\u4e00-\u9fd5\u3400-\u4db5\u20000-\u2a6d6\u2a700-\u2b734]+

The above regular expression will find strings of one or more Chinese characters from any of the ideograph sets.

For example: [[file:ChineseStringRetentionPatternAllChineseCharacters.png | 500px]]
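
As a quick illustrative check in Java (class name and sample text invented; only the BMP ranges are used here, since \uXXXX escapes in Java regex cannot express supplementary code points), the character-range approach works even where Unicode scripts are unsupported:

import java.util.regex.Pattern;

// Sketch: the character-range approach, which also works where Unicode
// script properties are unavailable (for example Java 1.6).
public class RangeMatch {
    public static void main(String[] args) {
        // Basic CJK Unified Ideographs plus Extension A (both in the BMP).
        Pattern chinese = Pattern.compile("[\\u4e00-\\u9fd5\\u3400-\\u4db5]+");

        System.out.println(chinese.matcher("\u4E2D\u6587").find());  // true  (中文)
        System.out.println(chinese.matcher("hello world").find());   // false

        // Note: in Java 7+, the supplementary-plane ranges (Extensions B and C)
        // need \x{...} escapes, for example [\x{20000}-\x{2A6D6}], rather than \uXXXX.
    }
}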

Character Ranges for Korean, Chinese and Japanese

  • Korean (Hangul)
  • Chinese (Han)
  • Japanese (Katakana, Hiragana, Kanji)

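As a rough at-a-glance reference only, the sketch below lists character classes built from the standard Unicode blocks most commonly associated with each script; the ranges are illustrative rather than exhaustive for any language, and the class and constant names are invented:

import java.util.regex.Pattern;

// At-a-glance character classes for the most common Unicode blocks of each
// script. These are not exhaustive; each language has additional blocks.
public class CjkRanges {
    // Hangul Syllables and Hangul Jamo
    static final Pattern KOREAN =
            Pattern.compile("[\\uAC00-\\uD7A3\\u1100-\\u11FF]+");

    // CJK Unified Ideographs (basic block)
    static final Pattern CHINESE =
            Pattern.compile("[\\u4E00-\\u9FFF]+");

    // Hiragana, Katakana, and CJK Unified Ideographs (Kanji)
    static final Pattern JAPANESE =
            Pattern.compile("[\\u3040-\\u309F\\u30A0-\\u30FF\\u4E00-\\u9FFF]+");

    public static void main(String[] args) {
        System.out.println(KOREAN.matcher("\uD55C\uAD6D\uC5B4").matches());         // true (한국어)
        System.out.println(JAPANESE.matcher("\u30AB\u30BF\u30AB\u30CA").matches()); // true (カタカナ)
    }
}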