The Israeli Workshop on
Programming Languages & Development Environments
July 1, 2002

Organized by IBM Haifa Research Lab, Haifa University, Israel


  The Play-in/Play-out Approach and Tool: Specifying and Executing Behavioral Requirements
David Harel, Hillel Kugler and Rami Marelly, The Weizmann Institute of Science.

Abstract: A powerful methodology for specifying scenario-based requirements of reactive systems is described, in which the behavior is "played in" directly from the system's GUI or some abstract version thereof, and can then be "played out". The approach is supported and illustrated by a tool, which we call the play-engine. As the requirements are played in, the play-engine automatically generates a formal version in the language of live sequence charts (LSCs). As they are played out, it causes the application to react according to the universal ("must") parts of the specification; the existential ("may") parts can be monitored to check their successful completion.

Play-in is a user-friendly high-level way of specifying behavior and play-out is a rather surprising way of working with a fully operational system directly from its inter-object requirements. The play-out execution mechanism is enhanced with a "smart" play-out module, in which verification techniques, mainly model-checking, are used both to drive the model and to satisfy system tests, expressed as existential LSCs.

The ideas appear to be relevant to many stages of system development, including requirements engineering, specification, testing, analysis and implementation.

A Case For Sealing Classes In Java
Marina Biberstein, Vugranam C. Sreedhar and Ayal Zaks, IBM Research.

Abstract: It is well known that inheritance, as defined in most existing object-oriented languages, breaks encapsulation in a very subtle way. For instance, Java provides facilities for encapsulation at both the class level and the package level. But it also introduces language constructs (such as the public/protected modifiers and inheritance) that let clients access the internals of a class in subtle ways.

In this paper we introduce the notion of class sealing in Java, which selectively prevents clients of a package from accessing the internals of a class via inheritance. Class sealing allows extension of classes within a package, but prevents clients from extending such classes outside the package. We discuss three areas of application for class sealing: maintenance, security, and program optimization. We provide empirical evidence, collected from large bodies of real-life code, showing that sealed classes fit well within existing coding practice.
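The sealing construct itself is the paper's proposal, not existing Java; as a rough sketch of the intended effect using only standard Java (all class names here are invented for illustration), a package-private constructor already confines subclassing to the declaring package:

```java
// A "sealed-like" base class: its package-private constructor can be
// invoked (implicitly, via super()) only by subclasses in the same
// package, so extension is confined to the package.
class Shape {
    Shape() { }                       // package-private constructor
    public int area() { return 0; }
}

// Legal: declared in the same package as Shape.
class Square extends Shape {
    private final int side;
    Square(int side) { this.side = side; }
    @Override public int area() { return side * side; }
}

public class SealingDemo {
    public static void main(String[] args) {
        Shape s = new Square(3);
        System.out.println(s.area()); // prints 9
    }
}
```

Java itself later adopted a related `sealed` class modifier with a `permits` clause (JDK 17), giving finer control over which classes may extend a given class.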

Towards Aspect Architectures and Remodularization
Mika Katara and Shmuel Katz, Technion.

Abstract: To cope with complexity, software must be divided into modules so that different concerns, or matters of interest, are localized. Examples of concerns include features, performance, security, etc. However, conventional languages and notations do not support the modularization of concerns that cut across the dominant dimension of decomposition, for instance an object or function hierarchy. Recently, many aspect-oriented approaches have been introduced to support better modularization of software. While offering advantages over conventional approaches, they lack support for expressing relationships among the different aspects, and complex reconciliation may be needed when composing the aspects.

In this paper we propose a language-independent model of an aspect architecture that makes these relationships explicit and reveals common sub-aspects interesting in their own right. By defining the overlapping parts of the different aspects explicitly, their composition also becomes more automatic. The aspect architecture consists of augmentations to existing designs, where the different aspects correspond to collections of the augmentations. To support changing concerns over the software life cycle and between stakeholders, we outline how to remodularize the architecture to better match the new concerns. In principle, each augmentation can introduce only a small detail of the system; complex augmentations can then be built recursively from simpler ones using sequential and parallel composition.

Adapser: an LALR(1) Adaptive Parser
Adam Carmi, Technion.

Abstract: An adaptive parser is capable of adapting its parse tables and parse stack to incremental and decremental grammar modifications while parsing. Adapser is a C++ template library that implements an adaptive LALR(1) parser. It is designed to be used as a framework for constructing compilers that support parse-time grammar modifications.

Adapser is applicable in several contexts:
  1. It is well suited for parsing extensible languages that provide constructs for extending their own grammar. This reduces the 'conceptual distance' between the base general-purpose language and specific application domains.
  2. It may be used as a framework for systems in which the efficiency of the parser generator itself is crucial e.g. during the development stages of a language.
  3. It may be used to implement compilers that are able to handle context-dependent aspects of a grammar and other common tasks (e.g. type checking, type inference, declaration scope, overloading resolution, etc.) in the context of syntax analysis, rather than (as traditionally done) in the semantic analysis compilation phase. This approach results in a cleaner separation of responsibilities between compilation phases and reduces the complexity of semantic analysis.
Adapser embodies several novel algorithms. Unlike other adaptive parser implementations, Adapser's algorithms are based on the LALR(1) class of grammars, which are known to be well suited for describing most programming languages.

As a tool designed for practical usage, Adapser's algorithms were optimized for maximal performance (lazy and incremental evaluation of grammar changes), but not at the expense of other important features such as error recovery and parsing of a static grammar.
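Adapser itself is a C++ template library built on LALR(1) tables; purely as a toy sketch of the general idea (in Java, with invented names, and with naive token scanning in place of LALR(1) parsing), parse-time grammar modification can be pictured as a parser whose rule set is mutable while input is being processed:

```java
import java.util.*;

// Toy illustration (not LALR(1), unlike Adapser): a parser whose grammar --
// here just the set of accepted infix operators -- can be extended while
// input is being processed, mimicking parse-time grammar modification.
public class TinyAdaptiveParser {
    private final Set<Character> operators = new HashSet<>(List.of('+', '*'));

    // "Grammar modification": register a new infix operator at parse time.
    public void addOperator(char op) { operators.add(op); }

    // Accepts strings of single-digit operands separated by known operators.
    public boolean parses(String input) {
        boolean expectOperand = true;
        for (char c : input.toCharArray()) {
            if (expectOperand) {
                if (!Character.isDigit(c)) return false;
            } else if (!operators.contains(c)) {
                return false;
            }
            expectOperand = !expectOperand;
        }
        return !expectOperand; // input must end on an operand
    }

    public static void main(String[] args) {
        TinyAdaptiveParser p = new TinyAdaptiveParser();
        System.out.println(p.parses("1+2*3")); // true
        System.out.println(p.parses("1-2"));   // false: '-' not yet known
        p.addOperator('-');                    // incremental grammar change
        System.out.println(p.parses("1-2"));   // true
    }
}
```

In Adapser proper, such a change would trigger lazy, incremental recomputation of the affected LALR(1) table entries rather than a simple set update.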

A Specification-Oriented Framework
Eliezer Kantorowitz, Technion.

Abstract: A costly part of software development is verification, i.e., checking that the code implements the specification correctly. We introduce the concept of specification-oriented frameworks, whose purpose is to facilitate verification. A specification-oriented framework enables direct translation of the specifications into code whose equivalence with the specification is easy to establish. The feasibility of this concept was investigated with the experimental Simple Interfacing (SI) framework for constructing the user-interface software of interactive information systems. Such an information system is constructed by translating the natural-language use-case specification into SI-based code. SI provides high-level methods for data entry and data display. The use-case coding assumes that a database schema and the required complex data manipulations are developed separately. The code produced for the use cases of five small projects corresponded quite closely to the natural-language specifications and facilitated verification. The produced code provides traces to the corresponding use cases, which is expected to facilitate later extensions and modifications of the software.

Asset Locator – A Framework for Enterprise Software Asset Management
Avi Yaeli, Alex Akilov, Sara Porat, Iftach Ragoler, Shlomit Shachor-Ifergan and Gabi Zodik, IBM Research.

Abstract: This paper introduces the Enterprise Software Asset Management (ESAM) paradigm, which defines an approach to automated software asset management. ESAM is a comprehensive integrated solution supporting search and reuse, collaboration, knowledge sharing, impact analysis, and other enterprise-centric services. We describe Asset Locator, a low-cost, scalable, and extensible solution that realizes ESAM. Asset Locator uses a set of autonomous scheduled crawlers that scan enterprise repositories to discover development resources. A set of domain-specific analyzers processes the discovered resources by identifying and extracting semantic features. Powerful search and navigation engines enable clients to explore the analyzed information. The design of Asset Locator as an extensible framework has enabled its easy integration into several IBM product offerings.

Development/Maintenance/Reuse: Software Evolution in Product Lines
Stephen R. Schach, Vanderbilt University and Amir Tomer, Rafael.

Abstract: The evolution tree model is a two-dimensional model that describes how the versions of the artifacts of a software product evolve. The propagation graph is a data structure that can be used for effective control of the evolution of the artifacts of a software product. In this paper we extend the evolution tree model and propagation graph to handle the evolution of a software product line.

Software product lines are characterized by large-scale reuse, especially of core assets. We show how a third dimension can be added to the evolution tree model to handle this reuse. In particular, the new model incorporates bidirectional reuse within product lines. That is, the new model can handle the transfer of an artifact from the core assets repository to a specific product (acquiring a core asset) as well as the transfer of a specific asset from a specific product to the core assets repository (mining an existing asset).

Applications of Concept Lattices to Code Inspection and Review
Uri Dekel, Technion.

Abstract: Given a small universe comprising a set of items and their properties, there is a well-known mathematical procedure for creating the lattice of concepts in this universe, where each concept is a coherent set of items with their distinctive features. In this research we show how this technique can be applied to two recurring tasks of the Java programmer: (1) reverse engineering of class files, and (2) clustering, structuring, and understanding the methods of a given class.

We view the set of data members used by a Java method as its essential characteristics. From this perspective, the construction of the class concept lattice is straightforward, yielding a concise visualization of the class. For the purpose of class understanding and reverse engineering, we offer a methodology for structured analysis of this lattice. A large real-life example demonstrates how non-trivial discoveries about the class implementation can be made solely by inspecting the lattice and the names of methods and data members. This example also shows how concept lattices can be used to compare different versions of the class.
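As a rough sketch of this construction (the class and its members below are invented, and the paper's actual algorithm may differ), characterizing each method by the fields it uses and intersecting over subsets of methods yields every intent of the class concept lattice:

```java
import java.util.*;

public class ConceptSketch {
    // Each method is characterized by the set of fields (data members) it
    // uses; a concept's intent is the set of fields shared by some group of
    // methods.  Brute-force intersection over all method subsets yields
    // every intent in the class concept lattice.
    public static Set<Set<String>> intents(Map<String, Set<String>> uses) {
        List<Set<String>> rows = new ArrayList<>(uses.values());
        Set<Set<String>> result = new HashSet<>();
        int n = rows.size();
        for (int mask = 0; mask < (1 << n); mask++) {
            Set<String> common = null;
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) == 0) continue;
                if (common == null) common = new HashSet<>(rows.get(i));
                else common.retainAll(rows.get(i));
            }
            if (common == null) {              // empty subset: top concept,
                common = new HashSet<>();      // intent = all fields
                for (Set<String> r : rows) common.addAll(r);
            }
            result.add(common);
        }
        return result;
    }

    public static void main(String[] args) {
        // A toy stack-like class: push/pop touch both fields, size only one.
        Map<String, Set<String>> uses = Map.of(
            "push", Set.of("elems", "top"),
            "pop",  Set.of("elems", "top"),
            "size", Set.of("top"));
        System.out.println(intents(uses)); // the intents {elems, top}, {top}
    }
}
```

Even in this tiny example, the lattice shows push and pop clustering around a cohesive core that size merely observes, which is the kind of structural discovery the methodology aims at.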

A more detailed visualization of a class can be obtained by superimposing the function call graph on the class concept lattice.

We describe a semi-automatic technique which, using the class concept lattice and the function call graph, clusters, organizes, sorts, and otherwise structures the code of a given Java class so as to achieve an efficient path for code inspection.

Model Based Rapid Application Development
Netta Shani and Shiri Davidson, IBM Research.

Abstract: Web applications are becoming an increasingly common means of accessing online services. These applications involve both client-side and server-side components. It is therefore important to model both components in a single model, i.e., to introduce an end-to-end application model. Such a model should contain all four application layers: data, user interaction, flow, and business logic.

Many published models and standards address a single layer or a subset of these layers; they do not aim at modeling a complete end-to-end application.

This paper presents a programming model that represents a complete end-to-end application. It represents an application at a high level of abstraction and thus frees the user from low-level implementation details, making it appropriate for rapid application development (RAD). A RAD environment developed on top of this model is briefly described.

Modeling Code Mobility Paradigms in OPM/Web
Iris Reinhartz-Berger, Dov Dori, and Shmuel Katz, Technion.

Abstract: Web applications are expected to exhibit dynamic behavior through such features as animation, dynamic presentations, or filling in interactive forms. The tradeoff between evolving functional requirements and existing bandwidth limitations requires addressing code mobility issues to enable dynamic reconfiguration of the binding between the software components and their physical locations. Adding the mobility capability to distributed applications supports disconnected operations and can enhance system flexibility, reduce bandwidth consumption and total completion time, and improve fault tolerance.

In this paper we propose generic models for the transfer stage of the code migration for each one of the common design paradigms. This stage includes (1) a decision as to when and if to transfer the code and (2) its actual transfer. The models proposed in this work use OPM/Web, an extension of the Object-Process Methodology (OPM) to distributed systems and Web applications. OPM/Web enables modeling the structure (i.e., objects and their relations) and behavior (i.e., processes and their links to objects) of the code migration process in a single view.

Modeling Events in Object-Process Methodology and in Statecharts
Iris Reinhartz-Berger, Arnon Sturm and Dov Dori, Technion.

Abstract: Complex systems are often reactive, i.e., they continuously respond to external and internal stimuli (events) and may have time constraints. When modeling such systems, the designer should be able to determine the system's behavior as well as its flow of control. One common way of expressing control flow is via Event-Condition-Action (ECA) rules. These rules specify, for each action (process), its triggering event and its guarding condition. The action is executed when the triggering event occurs, if and only if the guarding condition is fulfilled at that time. In this paper, we specify how two modeling approaches, Statecharts and Object-Process Methodology (OPM), model the ECA paradigm, and we compare the expressive power of the respective models. We examine the types of events supported, how these event types are integrated into complete system specifications, and the potential implications for the code derived from each of the specifications.
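The ECA semantics just described can be sketched as follows (a minimal illustration with invented names, not the OPM or Statecharts formalism):

```java
import java.util.function.Predicate;

public class EcaDemo {
    // A minimal Event-Condition-Action rule: when the triggering event
    // occurs, the action runs if and only if the guarding condition holds.
    static final class Event {
        final String name;
        final int value;
        Event(String name, int value) { this.name = name; this.value = value; }
    }

    static final class Rule {
        final String trigger;          // name of the triggering event
        final Predicate<Event> guard;  // guarding condition
        final Runnable action;         // action (process) to execute
        Rule(String trigger, Predicate<Event> guard, Runnable action) {
            this.trigger = trigger; this.guard = guard; this.action = action;
        }
        void on(Event e) {
            if (e.name.equals(trigger) && guard.test(e)) action.run();
        }
    }

    public static void main(String[] args) {
        Rule alarm = new Rule("temperature", e -> e.value > 100,
                () -> System.out.println("cooling on"));
        alarm.on(new Event("temperature", 120)); // condition holds: acts
        alarm.on(new Event("temperature", 80));  // condition fails: no action
    }
}
```

The comparison in the paper is precisely about how each formalism expresses the trigger, guard, and action of such a rule, and what event types each supports.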

Object Relations and Syntactic Mechanisms in Design Patterns
Uriel Cohen, Technion.

Abstract: One of the main problems of software engineering is capturing the knowledge of expert software designers and engineers so that other, less experienced practitioners can apply that knowledge to improve their designs.
Design patterns were invented precisely for this purpose: to explain and record recurring designs in object-oriented programming.

The motivation for this research is the belief that incorporating design patterns into object-oriented languages like C++ or Java as language constructs will alleviate the problems of the current approach to design patterns. We investigate the syntactic mechanisms that allow the implementation of design patterns, and the relations between the objects that compose them, in order to reach a better understanding of the set of language features that would permit implementing them easily.
This leads to a new classification of design patterns according to criteria based on syntax rather than semantics.

A taxonomy of patterns is compiled, and a data set of design patterns is selected from this taxonomy.
We argue that this data set comprises good candidates for becoming object-oriented language features.

Composition is a syntactic technique used throughout our data set of design patterns; indeed, we claim it is the syntactic mechanism most frequently used in the data set.
Several criteria for classifying and implementing composition are suggested and studied on our data set of design patterns.
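As a concrete illustration of composition as a syntactic mechanism (using the well-known Decorator pattern; the names below are invented), the wrapping object holds a reference to another object rather than inheriting from it:

```java
// Decorator realized purely through composition: Bold holds a reference to
// another Component (has-a) rather than inheriting its implementation (is-a).
interface Component { String render(); }

class Text implements Component {
    private final String body;
    Text(String body) { this.body = body; }
    public String render() { return body; }
}

class Bold implements Component {
    private final Component inner;   // the composed object
    Bold(Component inner) { this.inner = inner; }
    public String render() { return "<b>" + inner.render() + "</b>"; }
}

public class CompositionDemo {
    public static void main(String[] args) {
        System.out.println(new Bold(new Text("hi")).render()); // <b>hi</b>
    }
}
```

A language construct for such patterns would have to capture exactly this object relation, which is why composition figures so prominently in the classification.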

We postulate that knowledge of design patterns should be incorporated into object-oriented languages using the analyzed syntactic mechanisms.