Macros extended course. Part 3

Meta-attributes

Many erroneously assume that the power of macros is in their ability to change program syntax. In my view, although this ability is highly attractive in some cases, the main power of macros is in their ability to change code semantics.

Meta-attributes do not change syntax by themselves, but nevertheless, they enable some beautiful and effective solutions.

Many of those who have programmed in C# or VB.NET have used attributes. Attributes are an outstanding way to extend code meta-information (by attaching additional information to certain language constructs, such as classes or methods). However, this meta-information can only be used at runtime. Of course, this sometimes produces a good effect, but one often wants to process the meta-information at compile time.

.NET programming gurus found two solutions to this problem:

Dynamic code compilation. There are two approaches here. The first is to generate source code in some high-level programming language (such as C#) and then use a compiler (or the corresponding .NET API for invoking it) to dynamically build an assembly. This is how XmlSerializer was implemented in Framework 1.x. The second approach is to use the high-level System.Reflection.Emit API (hereafter SRE). This approach was used, for instance, in Igor Tkachev's (known at RSDN as IT) implementation of Business Logic Toolkit for .NET (BL Toolkit, www.bltoolkit.net).

External code generators that are run during the build process, between the compilations of separate assemblies.

What, then, are the advantages and drawbacks of the listed approaches? The advantage of the second approach is that the generated code is relatively easy to debug. Its drawback is that source code generation in languages like C# is quite an arduous task. Besides, such an approach significantly slows down the build: it becomes necessary to compile extra assemblies, as well as load an assembly into memory just to read information from it. Another drawback of this approach is that generation cannot use data that only becomes available during execution, although, as practice shows, in the overwhelming majority of cases generation relies on data already present at compile time. In addition, compilation can also be performed at runtime, so the first approach (generation via a high-level language) can be viewed as a variation of the second.

Approach 1 makes it possible to generate code using runtime data, but it slows the program down (after all, it performs compilation during initialization or even during the program's main work). In addition, it is much more difficult to debug and can lead to security holes (after all, generators are overly flexible systems and may be hijacked by attackers).

Using SRE is the most flexible and relatively simple way to generate code at runtime, but it is the most difficult to debug and implement.

There is one more code generation method. It uses external programs for generation. CodeSmith and the StringTemplate library are examples of such generators. These solutions typically use a custom templating language. The problem with them is that they work with code as flat text and do not allow metadata extraction. This forces one to describe metadata in external sources, which complicates development.

At this point, many readers are probably wondering where I am going with this. Of course, I argue that meta-attributes are the best way to make attributes come "alive". In appearance, meta-attributes are no different from regular C# attributes, but instead of ending up in the assembly after compilation, they trigger the corresponding macros. Everything you pass to the macro through its parameters is presented to it as parsed code (that is, as PExpr). At this point the parameters' meaning is not yet interpreted, and you are free to interpret them however you like. This, by the way, makes it possible to use meta-attributes as containers for your DSLs.

Every macro, including meta-attributes, works at compile-time. As you know, macros can use almost all compiler faculties. This makes it possible to achieve quite elegant solutions and get to the goal with the least amount of effort.

Were the .NET library writers able to use such expressive tools as macros and meta-attributes, many solutions could be achieved with much less effort and end up much more elegant, as well as incomparably more compact and easy to support.

As such, meta-attributes could be used to wrap infrastructure for working with web services, ASP.NET, XML and regular serialization, as well as much, much more, and all without changing the language syntax (after all, changing syntax is most often cited as a drawback by macro opponents)!

I think that by now you understand that meta-attributes are powerful and at the same time useful as hell. It is time to get into the details of their use.

Meta-attribute definition

A meta-attribute differs from a regular macro in that it has a set of required parameters and a description specified with the MacroUsage attribute.

Here is how a meta-attribute definition looks:

[MacroUsage(MacroPhase.BeforeTypedMembers,
            MacroTargets.Class, Inherited = true)]
macro MyFirstMetaattribute(_tb : TypeBuilder)
{
  ...
}
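
Once such a macro is compiled into a macro library and referenced by a project, it is applied exactly like an ordinary custom attribute. A hypothetical usage (the class name is arbitrary) looks like this:

[MyFirstMetaattribute]
public class SomeClass
{
}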

The compilation stage at which the macro is called is set with the first parameter, of MacroPhase type. This enumeration has been described in the first part of the article.

What can meta-attributes be applied to?

Like regular user (custom) attributes, meta-attributes can be applied to different code elements. The MacroTargets enumeration, which is a synonym for the standard .NET enumeration System.AttributeTargets, is responsible for the meta-attribute's target. Below are the possible values:

  • Class - the attribute can be applied to any type, including interfaces, enumerations, variants, and delegates. However, Module, Struct, Enum, Interface, and Delegate cannot be used as meta-attribute targets. If you want to create a meta-attribute that is applicable to, say, only interfaces, you can find out what kind of type is being compiled from the TypeBuilder object you are given (a minimal sketch of such a check is shown right after this list).
  • Method - the attribute is applicable to methods and property accessors.
  • Field - the attribute is applicable to type fields.
  • Property - the attribute is applicable to properties.
  • Event - the attribute is applicable to events.
  • Parameter - the attribute is applicable to method parameters.
  • Assembly - the attribute is applicable to the assembly (that is, a global attribute).
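
Here, for example, is a minimal sketch of such a check: a Class-targeted meta-attribute (the macro name InterfacesOnly is hypothetical) that accepts only interfaces. It inspects the parsed declaration available through the TypeBuilder (the AstParts property is discussed below):

[MacroUsage(MacroPhase.BeforeTypedMembers, MacroTargets.Class, Inherited = false)]
macro InterfacesOnly(tb : TypeBuilder)
{
  // The parsed declaration tells us what kind of type is being compiled.
  match (tb.AstParts.Head)
  {
    | TopDeclaration.Interface => () // an interface - nothing to report
    | _ => Message.Error(tb.Location,
             "InterfacesOnly is applicable only to interfaces")
  }
}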

Depending on the value of MacroTargets, the meta-attribute's parameter list must begin with zero to three required parameters. The types of these parameters are determined by the stage at which the macro is called.

Meta-attribute parameters

Like simple attributes, meta-attributes can have parameters. However, the parameters are not passed as objects (as they are for simple attributes), but as PExpr (that is, as an AST).

Besides, meta-attributes have additional, required parameters that are always present. The types of these additional parameters depend on the target of the meta-attribute (set with MacroTargets) and the stage at which it runs (set with MacroPhase). Table 1 shows the relationship between MacroPhase and MacroTargets. The intersections list the parameter types (the first entry describes the first parameter's type, the second, appropriately, the second's, and so on).

MacroTarget \ MacroPhase | BeforeInheritance                           | BeforeTypedMembers                          | WithTypedMembers
Class                    | TypeBuilder                                 | TypeBuilder                                 | TypeBuilder
Method                   | TypeBuilder, ParsedMethod                   | TypeBuilder, ParsedMethod                   | TypeBuilder, MethodBuilder
Field                    | TypeBuilder, ParsedField                    | TypeBuilder, ParsedField                    | TypeBuilder, FieldBuilder
Property                 | TypeBuilder, ParsedProperty                 | TypeBuilder, ParsedProperty                 | TypeBuilder, PropertyBuilder
Event                    | TypeBuilder, ParsedEvent                    | TypeBuilder, ParsedEvent                    | TypeBuilder, EventBuilder
Parameter                | TypeBuilder, ParsedMethod, ParsedParameter  | TypeBuilder, ParsedMethod, ParsedParameter  | TypeBuilder, MethodBuilder, ParameterBuilder
Assembly                 | -                                           | -                                           | -

Table 1. Required meta-attribute parameters for different MacroPhase and MacroTarget values.

I hope you guessed that the dashes in the last row mean that assembly meta-attributes need no required parameters.

The types ParsedField, ParsedMethod, ParsedEvent, and ParsedParameter are actually synonyms for the AST types I described earlier (table 2).

Synonym          | Actual type
ParsedField      | ClassMember.Field
ParsedMethod     | ClassMember.Function
ParsedProperty   | ClassMember.Property
ParsedEvent      | ClassMember.Event
ParsedParameter  | PParameter
ParameterBuilder | Nemerle.Compiler.Typedtree.TParameter

Table 2. Relationship between macro parameter type synonyms and actual types in the compiler.

Both types (ClassMember and PParameter) are declared in the namespace Nemerle.Compiler.Parsetree. ClassMember is a variant describing class members, while PParameter is a class describing method parameters.

As mentioned in the previous parts, ClassMember values can be constructed with quotation and analyzed with pattern matching. This significantly simplifies solutions to many problems.
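
For example, a class member can be built with a quotation and then taken apart with match. Here is a minimal illustration (the member itself is arbitrary, and the fragment is assumed to run inside a macro):

def member = <[ decl: public Square(x : int) : int { x * x } ]>;

match (member)
{
  | ClassMember.Function as f => Message.Hint($"a method named $(f.Name)")
  | ClassMember.Field         => Message.Hint("a field")
  | _                         => ()
}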

If you take a closer look at table 1, you will see that until the MacroPhase.WithTypedMembers stage, work is done with the AST, while at that stage it is done with builders (PropertyBuilder, MethodBuilder, etc.). However, regardless of the compilation stage, a type is always described with a TypeBuilder object. Why? In part this was done because types need to be placed in namespaces (as was described in the first part of the article), and in part because of a misunderstanding. In any case, you should be aware that TypeBuilders contain different information at different stages. Before MacroPhase.WithTypedMembers they contain only the AST, while at that stage they also contain typed member collections. If you try to request a member list from a TypeBuilder at an earlier stage, an exception will be thrown. The AST, however, is accessible at all compilation stages through the AstParts property of TypeBuilder, because one type can consist of several parts (declared with the keyword partial). AstParts lists those parts; if the class is not partial, the list contains a single element.
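
As an illustration, here is a minimal sketch of a diagnostic meta-attribute (the name DumpType is hypothetical) running at MacroPhase.WithTypedMembers, where both the typed members and the parsed parts are available:

[MacroUsage(MacroPhase.WithTypedMembers, MacroTargets.Class, Inherited = false)]
macro DumpType(tb : TypeBuilder)
{
  // At this stage the typed member collection may be requested...
  foreach (member in tb.GetMembers())
    Message.Hint($"typed member: $(member.Name)");

  // ...while the parsed AST is available at any stage, one entry per partial part.
  foreach (part in tb.AstParts)
    Message.Hint($"parsed part declared at $(part.Location)");
}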

TIP
Always use the AstParts property, or your code may not work correctly with partial classes.
WARNING
Properties AstParts and Ast have appeared relatively recently. It was not always possible to access the AST at every stage.

Required macro parameters can be used to access information about language constructs to which the meta-attribute is applied. These parameters can also often be used to modify the constructs.

Additional meta-attribute parameters

If a meta-attribute requires additional information, then it can be passed via additional macro parameters. Like regular macro parameters, these must have the type PExpr, or be built-in types (int, double, string, ...).

The fact that a meta-attribute receives code (PExpr) lets it interpret that code however the problem at hand requires. In fact, the code does not even have to be compilable; the main thing is that the syntax rules are obeyed.
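
For instance, here is a minimal sketch of a method-level meta-attribute with one extra parameter (the name CheckOnEntry is hypothetical; the standard Requires macro from Nemerle.Assertions works along similar lines). The condition arrives as an unevaluated PExpr, and the macro simply splices it into an assertion at the top of the method:

[MacroUsage(MacroPhase.BeforeTypedMembers, MacroTargets.Method, Inherited = false)]
macro CheckOnEntry(_ : TypeBuilder, m : ParsedMethod, condition)
{
  // 'condition' is a PExpr; it is not evaluated here, only spliced into new code.
  def conditionText = condition.ToString();

  m.Body = <[
    Assert($condition, $(conditionText : string));
    $(m.Body)
  ]>;
}

A hypothetical usage: [CheckOnEntry(count > 0)] public Take(count : int) : string { ... }.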

TIP
Moreover, there is a way to pass a raw token list to a meta-attribute, which does not have to follow the syntax rules. However, this requires creating an additional lexical macro. The only limitation is that the code has to follow Nemerle's lexical rules, which are quite flexible. That said, there is a fly in the ointment: it is still necessary to keep brackets matched and correctly (recursively) nested (this has been explained in the previous parts of the article). Although this suits most DSLs perfectly, the limitation does not allow creating overly free-form DSLs. I will try to describe this solution's implementation in the following parts of the article (when I discuss non-trivial solutions in this area).
NOTE
It is possible that one more macro type will be added in the future: the PreParse macro. It will make it possible to remove this limitation. Today, however, the more extravagant DSLs can be placed in a new kind of string (recursive strings). These strings are delimited with the symbol combinations <# and #>, and allow nesting and line breaks, i.e. strings like this:
<# Hello, "<# mad #>
  <# 'mad'
    <# mad #> #>"
      world!
#>
This kind of string has been used to create the new DSL NemerleStringTemplate, a text template engine (something like XSLT for objects, but with a human face :).

NotNull macro example

Let's get right to the meat of it.

It sometimes happens that one reads a whole volume yet fails to understand how something works, but after seeing a usage sample says: "why, it's elementary, my dear Watson!" :). I will thus not bore you with theory and smoothly transition to showing you the macros.

We will begin with a simple example. Anyone who has written in C# or similar languages has had to write routine code for standard parameter verification checks. For instance, it is often necessary to check whether a parameter's value is null. The code we have to write looks approximately like this:

public static Method(parameter : string) : void
{
  Assert(parameter : object != null, <#The "NotNull" parameter contract #>
                                   + <#"parameter" has been violated.#>);
  def len = parameter.Length;
  _ = len; // Do some work...
}
NOTE
Assert here is System.Diagnostics.Trace.Assert(). Recall that in Nemerle a using directive can open not only namespaces, but also classes.

What is so bad about this code? The fact that it is code is bad :). It clutters the method's body, often mixing with it, and, most unpleasantly, has to be written manually, which creates opportunities for error. Another problem is that one cannot tell whether a method accepts null values just by reading its definition.

An experienced programmer will notice that one could create a static helper method that will greatly increase code clarity. Here is how such a method might look:

public module Helper
{
  public AssertNotNull(param : object, paramName : string) : void
  {
    Assert(param != null, <#The "NotNull" parameter contract #>
                        + $<#"$paramName" has been violated.#>);
  }
}

Then our hypothetical method would look like this:

public static Method(parameter : string) : void
{
  AssertNotNull(parameter, "parameter");
  def len = parameter.Length;
  _ = len; // Do some work...
}

Better than the first option? Undoubtedly!

Are there problems left? Unfortunately — yes.

One problem is that the programmer is forced to duplicate the parameter's name: once to pass the parameter's value to AssertNotNull and once to pass its name. In addition, the check lives in the method's body, which adds the risk of it being inadvertently moved, deleted, or modified. And since code is code, we cannot tell anything about the method's behaviour just by looking at its declaration. In general, we need to be more declarative.

What if we create the attribute NotNull that would tell us that a parameter it refers to does not allow null values (or, in contract-oriented programming terms, supports the NotNull contract)?

Awesome! Yes... but an attribute cannot do anything by itself. We would still have to insert a function call into the method body, which would use reflection to read the attribute and implement the logic. This means we do not get rid of handwritten code (we are not fully declarative). Besides, going through reflection adds significant overhead, which would almost certainly be unacceptable in many places in a program.

In general, most C# programmers would stop at the previous solution (using the method AssertNotNull), but we have a magic wand in our hands - meta-attributes! With their help we can make attributes adjust the program logic and do it behind the scenes, without adding any additional problems to our life.

Here is how our macro's skeleton looks:

[MacroUsage(MacroPhase.BeforeInheritance, MacroTargets.Parameter,
            Inherited = true, AllowMultiple = false)]
macro NotNull(_ : TypeBuilder, m : ParsedMethod, p : ParsedParameter)
{
  Message.Hint($"Parameter type m: $(m.GetType()) - '$m'");
  Message.Hint($"Parameter type p: $(p.GetType()) - '$p'");
}

The macro does not do anything useful yet, but we can apply it and make sure that it works.

Let's write code testing this attribute:

public class A
{
  public static Method1([NotNull] parameter : string) : void
  {
    WriteLine(parameter)
  }
}

If you have configured the solution correctly, two messages will be printed to the VS console:

 ...hint: Parameter type m: Nemerle.Compiler.Parsetree.ClassMember+Function –
   'Function: public static Method1(parameter : string) : void ;'
 ...hint: Parameter type p: Nemerle.Compiler.Parsetree.PParameter –
   'parameter : string'

Everything is as described in the theoretical part. We have two objects: one describes the method, the other describes the parameter.

Now we need to generate verification code (as if we wrote it manually) and add it to the method's beginning. Here is this code:

[MacroUsage(MacroPhase.BeforeInheritance, MacroTargets.Parameter,
            Inherited = true, AllowMultiple = false)]
macro NotNull(_ : TypeBuilder, m : ParsedMethod, p : ParsedParameter)
{
  def msg = <#The "NotNull" parameter contract "#>
          + $<#$(p.Name)" has been violated.#>;

  m.Body = <[
    Assert($(p.ParsedName : name) : object != null, $(msg : string));
    $(m.Body)
  ]>;
}

Nothing complicated, but I will explain it, just in case.

The first line builds a string, which will be displayed in a message box, should the Assert check fail:

def msg = <#The "NotNull" parameter contract "#>
        + $<#$(p.Name)" has been violated.#>;

Since the parameter name is formed based on information received from the compiler, there is no need to pass this name to the macro manually. This, in turn, means there is simply no opportunity for us to make an error.

The following expression forms the method body with the help of quotation, and replaces the old code:

m.Body = <[
  Assert($(p.ParsedName : name) : object != null, $(msg : string));
  $(m.Body)
]>;

The ParsedName property contains the name in the internal compiler representation. The specification "... : name" tells the compiler that it is given a name, not an expression (PExpr). msg already contains a string, which we also tell the compiler.

NOTE
Of course, we would prefer the compiler to figure out the splice types on its own, but at this time it needs these hints.
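
For reference, the splice forms used in this macro (the identifiers below are placeholders) are:

<[ $(someName : name) ]>      // splices a compiler Name (e.g. a parameter's name)
<[ $(someString : string) ]>  // splices a string value as a string literal
<[ $someExpr ]>               // splices another expression (e.g. the old method body)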

The result is a generated Assert call analogous to the handwritten code, but with the variable's name in it formed automatically, based on information passed to the macro by the compiler.

The expression $(m.Body) simply places the method body into that spot.

The result is that we add the necessary check to the method body. If the method contains more than one parameter marked with the attribute, the macro will run several times, and the body will end up containing all the necessary checks.

ADVICE
Notice that the generated code does not depend on any other parts of the method. This makes it possible to ignore the order in which the macro is called for the different method parameters. When you design your own macros, try to do the same.

Let's try it...

We now need to add a call to the test method:

module Program
{
  Main() : void
  {
    A.Method1(null);
  }
}

Now build the project and run it. You will then see the standard Assert window (see figure 1).

Figure 1. The "Assert" dialog, reporting that the NotNull contract has been broken.

I think that I need not explain that you can ignore the problem or press "Retry" and move to the point where the NotNull contract has been broken. Although, I guess I just did explain it. :)

Problems...

Everything is great, but this macro causes a problem which, due to the method's utter simplicity, might have escaped your attention. Nevertheless, the problem exists, and it is a serious one: we lose IntelliSense for the method to which we apply this meta-attribute.

This happens because we modified not only the method body, but also the information about its location. This information is used by the Visual Studio integration package to calculate method locations and decide whether editing occurs inside a method or outside one.

Specifically, code generated with quotation has the IsGenerated flag set in its Location property.

NOTE
I have already described the Location structure, but it is worth repeating. It specifies a program fragment's coordinates in the source file. The lexer stores locations inside tokens, and from there they propagate to all AST data structures. The compiler uses them to point the programmer at error locations in the source file.

To fix this problem, we need to give the method body its original Location value.

This can be done by saving it in a local variable before modifying the method body and restoring it afterwards. Here is how the code looks now:

[MacroUsage(MacroPhase.BeforeInheritance, MacroTargets.Parameter,
            Inherited = true, AllowMultiple = false)]
macro NotNull(_ : TypeBuilder, m : ParsedMethod, p : ParsedParameter)
{
  def loc = m.Body.Location; // remember location
  def msg = <#The "NotNull" parameter contract "#>
          + $<#$(p.Name)" has been violated.#>;

  m.Body = <[
    Assert($(p.ParsedName : name) : object != null, $(msg : string));
    $(m.Body)
  ]>;

  m.Body.Location = loc; // restore the original location (IsGenerated is false again)
}

If we compile the solution now, IntelliSense will work correctly.

Remove unnecessary overhead...

And so, in mere minutes we managed to create a macro that saves us from manual work and possible errors resulting from it, while making our code more declarative!

What else could we dream of?

It turns out that we can dream about improving performance. Of course, the .NET JIT is pretty good and can at times remove such overhead. If I were writing the code manually, I would put readability first and would not add an extra check just to avoid a method invocation (the Assert call). But in this case the code is generated automatically from a single template, and the programmer using the macro will never see it. We can therefore modify the macro like this:

[MacroUsage(MacroPhase.BeforeInheritance, MacroTargets.Parameter,
            Inherited = true, AllowMultiple = false)]
macro NotNull(_ : TypeBuilder, m : ParsedMethod, p : ParsedParameter)
{
  def loc = m.Body.Location;
  def msg = <#The "NotNull" parameter contract "#>
         + $<#$(p.Name)" has been violated.#>;

  m.Body = <[
    when ($(p.ParsedName : name) : object == null)
      Assert(false, $(msg : string));
    $(m.Body)
  ]>;

  m.Body.Location = loc;
}

Aspiring to perfection...

As it is, the macro is production-ready. That is, it can already be used in real projects.

NOTE
Moreover, a similar macro already exists in the standard Nemerle library: Nemerle.Assertions.NotNull, whose code is located in the file https://github.com/rsdn/nemerle/tree/master/macros/assertions.n (along with numerous other useful macros intended for programming in the Design by Contract style). Descriptions of these macros can be found at http://nemerle.org/Design_by_contract_macros.

However, there is a small issue with the macro. It is really nothing, but it gives us a chance to demonstrate how to get type information.

The issue is that the macro we have created can be applied to parameters of value types as well as reference types. For instance, we could modify the example code in the following manner:

public class A
{
  public static Method1([NotNull] parameter : int) : void
  {
    WriteLine(parameter)
  }
}

module Program
{
  Main() : void
  {

    A.Method1(1);
  }
}

Notice that the string (a reference type) has been replaced by an integer (a value type).

When we attempt to compile this code, it compiles without a hitch and will work, but the verification code will perform boxing. And who wants boxing in a performance-critical method because of a small oversight? Certainly not me! I would want the compiler not to generate dumb code, but to warn me about an unwise application of the macro (an error would probably be unwarranted, since there is nothing criminal about it). And while the non-optimality could be forgiven for plain value types, nullable types, which are also value types, make it a real problem! We should really add a check here and react differently to parameters of reference types, plain value types, and nullable types.

What prevents us from adding such a check? The thing is that at the stage we chose (MacroPhase.BeforeInheritance) it is fairly difficult to get information about parameter types (it could be done, but with some effort). As I mentioned earlier, parameter type information can easily be accessed at the MacroPhase.WithTypedMembers stage. I also mentioned that this stage makes it difficult to modify type and member declarations, but it is perfectly suited for macros that generate or modify type members (including methods). In fact, the MacroPhase.WithTypedMembers stage is perfectly suited for our purposes.

Below is the macro code modified to use the MacroPhase.WithTypedMembers stage and to handle the situation when the meta-attribute is applied to a parameter whose type does not support null. The macro also adds special support for nullable types.

[MacroUsage(MacroPhase.WithTypedMembers, MacroTargets.Parameter,
            Inherited = true, AllowMultiple = false)]
macro NotNull(_ : TypeBuilder, m : MethodBuilder, p : ParameterBuilder)
{
  if (p.ty.CanBeNull)
  {
    def loc = m.Body.Location;
    def msg = <#The "NotNull" parameter contract "#>
            + $<#$(p.Name)" has been violated.#>;

    def name = <[ $(p.AsParsed().ParsedName : name) ]>;
    def condition = if (p.ty.Fix().IsValueType) name
                    else                        <[ $name : object ]>;

    m.Body = <[
      when ($condition == null)
        Assert(false, $(msg : string));

      $(m.Body)
    ]>;

    m.Body.Location = loc;
  }
  else
    Message.Warning(p.Location,
      $"Parameter '$(p.Name)' has type '$(p.ty)' which does not support null");
}

The main modification in this code is the compilation stage change: the macro now executes at MacroPhase.WithTypedMembers. In accordance with table 1, we replaced ParsedMethod with MethodBuilder and ParsedParameter with ParameterBuilder (recall that, according to table 2, ParameterBuilder is a synonym for Typedtree.TParameter, which describes a typed parameter). This allows us to use information about the parameter's type (which has already been inferred by this stage). Luckily, type descriptions in Nemerle have the property CanBeNull, which answers the question of whether an instance of the type can take the value null. Moreover, this property is quite smart and knows about the existence of nullable types (it is true for them as well).

The condition expression deserves special mention. This time it is not written inline, but built up in the "condition" variable. This is required because nullable types, unlike reference types, should not be cast to object. Well, they could be, but they should not be, because that would lead to boxing. To put it simply, the code:

def value = null : int?;

when (value == null)
  WriteLine("The variable 'value' has no value");

turns into the following MSIL:

 L_0000: nop
 L_0001: nop
 L_0002: ldloca.s nullable
 L_0004: initobj [mscorlib]System.Nullable`1<int32>
 L_000a: ldloc.0
 L_000b: stloc.1
 L_000c: nop
 L_000d: nop
 L_000e: ldloca.s 'value'
 L_0010: call instance bool [mscorlib]System.Nullable`1<int32>::get_HasValue()
 L_0015: ldc.i4.0
 L_0016: ceq
 L_0018: brfalse L_002e

but the code:

def value = null : int?;

when (value : object == null)
  WriteLine("The variable 'value' has no value");

turns into:

 L_0000: nop
 L_0001: nop
 L_0002: ldloca.s nullable
 L_0004: initobj [mscorlib]System.Nullable`1<int32>
 L_000a: ldloc.0
 L_000b: stloc.1
 L_000c: nop
 L_000d: nop
 L_000e: ldloc.1
 L_000f: box [mscorlib]System.Nullable`1<int32>
 L_0014: ldnull
 L_0015: ceq
 L_0017: brfalse L_002d

That is, the compiler recognizes that a nullable type is used and rewrites the code to use the type's own members (such as HasValue), but when the value is cast to object, the compiler does what it is told unquestioningly and boxes the value. Everything still works correctly, since the CLR has special code recognizing boxed nullable types, but we lose some performance. So, the following code fragment:

  def name = <[ $(p.AsParsed().ParsedName : name) ]>;
  def condition = if (p.ty.Fix().IsValueType) name
                  else                        <[ $name : object ]>;

builds, in the variable "condition", an expression suitable for the type being processed. For reference types it generates the expression:

variable_name : object

while for value types (which here can only be nullable types, since we have already checked CanBeNull) just the variable name is generated. This expression is then used in the "when" operator:

  m.Body = <[
    when ($condition == null)
      Assert(false, $(msg : string));
 ...

Which gives us the desired result.

The AsParsed method makes it possible to retrieve the parameter's representation as a ParsedParameter. It is needed to get the parameter's name in the compiler's internal format (for use in the quotation).

If the type supports null (i.e. it is a reference type or a nullable type), the macro generates verification code. If the type does not support null, the user gets a warning, and no verification code is generated.

This is it — the power of macros in a statically typed language!

My inquisitive mind now simply demands a scientific experiment! :) Let's try to compile the sample code that we modified previously (the one in which the parameter type is changed to int)... As expected, the compiler outputs the following warning:

 ...\Main.n(25,35):Warning: Parameter 'parameter'
 has type 'int' which does not support null

In fact, there is no longer even a need to compile the code: the warning appears in the IDE immediately after changing the parameter's type from string to int (figure 2).

Figure 2. A warning generated by the NotNull macro is shown in the IDE.

Let's test the claims about nullable type support. Append "?" to the parameter's type (that is, change int to Nullable[int]):

public class A
{
  public static Method1([NotNull] parameter : int?) : void
  {
    WriteLine(parameter)
  }
}

module Program
{
  Main() : void
  {

    A.Method1(1);
  }
}

Dear god! The warning is gone! :)

Now replace the call again to make the method accept null as a parameter:

A.Method1(null);

Run the test application... the Assert dialog is there.

Decompiling the program shows that the generated code is of high quality :).

Now, let's think...

In the end, our macro turned out to be less simple than the initial description suggested. We had to make an effort to avoid breaking the IDE, figure out how to work with types, show a warning, and change the stage at which the macro executes, but in the end we achieved what we set out to do, and did it fairly simply. Where else can one extend the compiler's and the IDE's abilities so easily? I guess only in Lisp. But Lisp is not for everyone, and it is definitely not for those who want to get the most out of .NET and static typing.

Types are important...

Our macro is not inferior to built-in compiler features in either functionality or quality, though our goal was not to create a specific macro, but to demonstrate how to create macros in general.

When you set out to develop a complicated macro, your greatest challenge will be working with types. The NotNull meta-attribute uses type information, but only that which, so to speak, lies on the surface. In the real world, this becomes insufficient.

In the next part of this article I will tell you how to work with types.

References

This text is based on an article from RSDN Magazine #3-2007 by Vlad Chistiakov (VladD2).
