Computer and Information Science Faculty Research
For information about the department, please see the department's web site.
Browsing Computer and Information Science Faculty Research by Issue Date
Now showing 1 - 20 of 27
Item Open Access
Cooperative Policy Control for Peer-to-Peer Data Distribution (2010-03-10T22:33:16Z) Anderson, Eric; Li, Jun
Many network applications (such as swarming downloads, peer-to-peer video streaming and file sharing) are made possible by using large groups of peers to distribute and process data. Securing data in such a system requires not just data originators, but also those “distributors,” to enforce access control, verify integrity, or make other content-specific security decisions for the replicated or adapted data. In this paper, we introduce the concepts of cooperative policy enforcement and request type checking, and propose an implementation framework Q which uses these approaches to secure data in peer-to-peer systems. The Q framework associates every data object with relocatable policy descriptors which distributors can use to determine whether a request for that object should be granted and whether a data transfer meets a request. With minimal changes to the application or the framework, Q can define and enforce arbitrarily sophisticated policies across a wide range of applications. Policies can be written to work across applications, or to include application-specific criteria and behavior. We will also discuss integrating Q with several peer-to-peer applications, including Gnutella, distributed hash tables such as CAN and Chord, peer-to-peer video streaming, HTTP swarming and application-level routing.

Item Open Access
GLOWS: A high-fidelity worm simulator (2006) Ehrenkranz, Toby; Knickerbocker, Paul, 1980-; Li, Jun; Stafford, Shad
This work presents our GLOWS (Gateway Level Oregon Worm Simulator) simulator, designed to produce realistic worm traffic over a broad range of scenarios. GLOWS simulates the spread of a worm across the Internet and its propagation into a single domain, with the goal of capturing the worm traffic that crosses the gateway point separating the monitored domain from the Internet.

Item Open Access
Revised 3.99 Report on the Algorithmic Language Scheme (University of Oregon, 2004-10) Clinger, William D.; Rees, Jonathan
The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have an exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. The introduction offers a brief history of the language and of the report. The first three chapters present the fundamental ideas of the language and describe the notational conventions used for describing the language and for writing programs in the language. Chapters 4 and 5 describe the syntax and semantics of expressions, programs, and definitions. Chapter 6 describes Scheme's built-in procedures, which include all of the language's data manipulation and input/output primitives. Chapter 7 provides a formal syntax for Scheme written in extended BNF, along with a formal denotational semantics. The report concludes with an example of the use of the language and an alphabetic index.
Item Open Access
Distributed Shared Memory: A Survey of Issues and Algorithms (University of Oregon, 1991-01-07) Lo, Virginia M.; Nitzberg, Bill
A distributed shared memory (DSM) is an implementation of the shared memory abstraction on a multicomputer architecture which has no physically shared memory. Shared memory is important (as a programming model) not only because of the vast number of existing applications which use it, but also because it is a more appropriate paradigm for certain algorithms. The DSM concept was demonstrated to be viable by Li, in IVY. Recently, there has been a surge of new projects which implement DSM in a variety of software and hardware environments. This paper gives an integrated overview of distributed shared memory. We discuss theoretical lower bounds on the performance of DSM systems; design choices such as structure and granularity, access, coherence semantics, scalability, and heterogeneity; and open problems in DSM. In addition, we describe algorithms used to implement and improve efficiency: reducing thrashing, eliminating false sharing, matching the coherence protocol to the type of sharing, and relaxing the semantics of the memory coherence provided. A spectrum of current DSM systems are used as illustrative examples.

Item Open Access
Macros That Work (University of Oregon, 1991-01) Clinger, William D.; Rees, Jonathan
This paper describes a modified form of Kohlbecker's algorithm for reliably hygienic (capture-free) macro expansion in block-structured languages, where macros are source-to-source transformations specified using a high-level pattern language. Unlike previous algorithms, the modified algorithm runs in linear instead of quadratic time, copies few constants, does not assume that syntactic keywords (e.g. if) are reserved words, and allows local (scoped) macros to refer to lexical variables in a referentially transparent manner. Syntactic closures have been advanced as an alternative to hygienic macro expansion. The problem with syntactic closures is that they are inherently low-level and therefore difficult to use correctly, especially when syntactic keywords are not reserved. It is impossible to construct a pattern-based, automatically hygienic macro system on top of syntactic closures because the pattern interpreter must be able to determine the syntactic role of an identifier (in order to close it in the correct syntactic environment) before macro expansion has made that role apparent. Hygienic macro expansion may be viewed as a book-keeping technique for deferring such decisions until macro expansion is locally complete. Building on that insight, this paper unifies and extends the competing paradigms of hygienic macro expansion and syntactic closures to obtain an algorithm that combines the benefits of both. Several prototypes of a complete macro system for Scheme have been based on the algorithm presented here.
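The capture problem that hygienic expansion avoids can be shown with a toy expander. The Python sketch below is not the paper's algorithm (which works on pattern-based templates and runs in linear time); it only contrasts a naive expansion of a hypothetical or2 macro with a gensym-renaming expansion, the book-keeping idea mentioned above. All names (or2, gensym) are illustrative.

    import itertools

    _fresh = itertools.count()

    def gensym(base):
        # Produce an identifier that cannot collide with any user-written name.
        return f"{base}%{next(_fresh)}"

    # Expressions are nested tuples of strings, e.g. ("if", "t", "t", "b").

    def expand_or2_naive(a, b):
        # Template for (or2 a b): (let ((t a)) (if t t b)).
        # If the user's code also uses the name "t", the template captures it.
        return ("let", (("t", a),), ("if", "t", "t", b))

    def expand_or2_hygienic(a, b):
        # Rename the identifier introduced by the template so it can never
        # capture a variable that came from the user's program.
        t = gensym("t")
        return ("let", ((t, a),), ("if", t, t, b))

    # (or2 #f t), where "t" is the user's own variable:
    print(expand_or2_naive("#f", "t"))     # user's "t" collides with the template's "t"
    print(expand_or2_hygienic("#f", "t"))  # template binding becomes e.g. "t%0"; no capture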
Item Open Access
RAGA: Musical Gantt Charts for Scheduling in Distributed Real Time Systems (University of Oregon, 1991-01) Lo, Virginia M.; Pall, Gurdeep Singh
We propose a simple extension to Gantt charts, called RAGA scores, for use in distributed real time scheduling. RAGA scores use a small set of symbols borrowed from musical notation to enrich the expressive power of the Gantt chart. These symbols enable the timing constraints of real time tasks to be displayed along with the schedule itself. Because of the criticality of timing constraints in real time systems, it is important to be able to visualize these constraints and the schedule simultaneously. The RAGA score is encapsulated as an abstract data type (ADT) that can be used as a tool in the design and visualization of static and dynamic scheduling algorithms, in the display of real time schedules for performance evaluation through simulation, and for historical records of actual schedules for performance evaluation through execution traces.

Item Open Access
Parallel Logic Programs on the HP Mayfly (University of Oregon, 1990-12-07) Conery, John S.
The Mayfly, a parallel processor being built at HP Labs in Palo Alto, has architectural support for several important aspects of the OM virtual machine for parallel logic programs. Each node has a coprocessor that is able to relieve the main processor of a significant amount of the "housekeeping" work of memory management, task switching, and message handling. This paper describes how the coprocessor implements kernel level functions in OM, with particular attention to the operations that support task switching. The paper includes detailed timing data from a program with interleaved parallel threads to show that while the main processor is busy in one thread the coprocessor can effectively build the context for the next thread.

Item Open Access
A Determinacy Testing Algorithm for Nondeterminate Flat Concurrent Logic Programming Languages (University of Oregon, 1990-11) Tick, E.; Korsloot, M.
This paper describes an algorithm for the code generation of determinacy testing for nondeterminate flat concurrent logic programming languages. Languages such as Andorra and Pandora require that procedure invocations suspend if there is more than one candidate clause potentially satisfying the goal. The algorithm described has been developed specifically for a variant of flat Pandora based on FGHC, although the concepts are general. We have extended Kliger and Shapiro's decision-graph construction algorithm to compile "don't know" procedures which must suspend for nondeterminate goal invocation. The determinacy test is compiled into a decision graph quite different from those of committed-choice procedures, but we argue that in most cases the same low space complexity is retained.

Item Open Access
Reliability of Partial k-tree Networks (University of Oregon, 1990-06-08) Mata-Montero, Erick
Recent developments in graph theory have shown the importance of the class of partial k-trees. This large class of graphs admits several algorithm design methodologies that render efficient solutions for a large number of problems inherently difficult for general graphs. In this thesis we develop such algorithms to solve a variety of reliability problems on partial k-tree networks with node and edge failures. We also investigate the problem of designing uniformly optimal 2-trees with respect to the 2-terminal reliability measure. We model a communication network as a graph in which nodes represent communication sites and edges represent bidirectional communication lines. Each component (node or edge) has an associated probability of operation. Components of the network are in either operational or failed state and their failures are statistically independent. Under this model, the reliability of a network G is defined as the probability that a given connectivity condition holds. The l-terminal reliability of G, Rel_l(G), is the probability that any two of a given set of l nodes of G can communicate. Robustness of a network to withstand failures can be expressed through network resilience, Res(G), which is the expected number of distinct pairs of nodes that can communicate. Computing these and other similarly defined measures is #P-hard for general networks. We use a dynamic programming paradigm to design linear time algorithms that compute Rel_l(G), Res(G), and some other reliability and resilience measures of a partial k-tree network given with an embedding in a k-tree (for a fixed k). Reliability problems on directed networks are also inherently difficult. We present efficient algorithms for directed versions of typical reliability and resilience problems restricted to partial k-tree networks without node failures, and then reduce to these the reliability problems that allow both node and edge failures. Finally, we study 2-terminal reliability aspects of 2-trees. We characterize uniformly optimal 2-trees, 2-paths, and 2-caterpillars with respect to Rel_2 and identify local graph operations that improve the 2-terminal reliability of 2-tree networks.
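To make the measures Rel_l and Res concrete, the following Python sketch computes both by brute-force enumeration over edge states for a tiny network with fail-safe nodes. It is exponential in the number of edges and stands in for the definitions only; the linear-time dynamic programming over a k-tree embedding described above is not reproduced. The example graph, probability, and terminal set are made up.

    from itertools import combinations, product

    def connected_finder(nodes, up_edges):
        # Union-find over the operational edges; returns a root-finding function.
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return find

    def rel_and_res(nodes, edges, p, terminals):
        # Rel_l: probability that all terminals are mutually connected.
        # Res:   expected number of distinct node pairs that can communicate.
        # Edges fail independently; each operates with probability p; nodes are fail-safe.
        rel = res = 0.0
        for state in product([True, False], repeat=len(edges)):
            prob = 1.0
            up = []
            for e, alive in zip(edges, state):
                prob *= p if alive else 1.0 - p
                if alive:
                    up.append(e)
            find = connected_finder(nodes, up)
            if len({find(t) for t in terminals}) == 1:
                rel += prob
            res += prob * sum(find(u) == find(v) for u, v in combinations(nodes, 2))
        return rel, res

    nodes = [1, 2, 3, 4]
    edges = [(1, 2), (1, 3), (2, 3), (1, 4), (3, 4)]   # a small 2-tree
    print(rel_and_res(nodes, edges, p=0.9, terminals=[1, 4]))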
Item Open Access
How to Read Floating Point Numbers (University of Oregon, 1990-06-05) Clinger, William D.
Consider the problem of converting decimal scientific notation for a number into the best binary floating point approximation to that number, for some fixed precision. This problem cannot be solved using arithmetic of any fixed precision. Hence the IEEE Standard for Binary Floating-Point Arithmetic does not require the result of such a conversion to be the best approximation. This paper presents an efficient algorithm that always finds the best approximation. The algorithm uses a few extra bits of precision to compute an IEEE-conforming approximation while testing an intermediate result to determine whether the approximation could be other than the best. If the approximation might not be the best, then the best approximation is determined by a few simple operations on multiple-precision integers, where the precision is determined by the input. When using 64 bits of precision to compute IEEE double precision results, the algorithm avoids higher-precision arithmetic over 99% of the time. The input problem considered by this paper is the inverse of an output problem considered by Steele and White: given a binary floating point number, print a correctly rounded decimal representation of it using the smallest number of digits that will allow the number to be read without loss of accuracy. The Steele and White algorithm assumes that the input problem is solved; an imperfect solution to the input problem, as allowed by the IEEE standard and ubiquitous in current practice, defeats the purpose of their algorithm.
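A rough Python rendering of the two-phase idea in the abstract, not Clinger's algorithm: the extended-precision fast path is replaced by an ordinary double guess, the exact check uses Python's arbitrary-precision Fraction, and halfway cases are not resolved to even.

    from fractions import Fraction
    import math

    def read_decimal(digits, exponent):
        # Return the double nearest to digits * 10**exponent (ties not broken to even).
        # Phase 1: cheap guess with ordinary double arithmetic (may be off by an ulp or two).
        # Phase 2: verify against the exact rational value and nudge one ulp at a time
        # until no neighbouring double is strictly closer.
        exact = Fraction(digits) * Fraction(10) ** exponent
        guess = digits * math.pow(10.0, exponent)
        while True:
            err = abs(Fraction(guess) - exact)
            direction = math.inf if Fraction(guess) < exact else -math.inf
            neighbour = math.nextafter(guess, direction)   # Python 3.9+
            if abs(Fraction(neighbour) - exact) < err:
                guess = neighbour                          # neighbour is strictly better
            else:
                return guess

    print(read_decimal(1, -1) == 0.1)       # True: nearest double to one tenth
    print(read_decimal(123456789, -20))     # correctly rounded 1.23456789e-12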
Item Open Access
Mapping Divide-and-Conquer Algorithms to Parallel Architectures (University of Oregon, 1990-01-19) Lo, V.; Rajopadhye, S. V.; Gupta, S.; Keldsen, D.; Mohamed, M.; Telle, Jan
In this paper, we identify the binomial tree as an ideal computation structure for parallel divide-and-conquer algorithms. We show its superiority to the classic full binary tree structure with respect to speedup and efficiency. We also present elegant and efficient algorithms for mapping the binomial tree to two interconnection networks commonly used in multicomputers, namely the hypercube and the two-dimensional mesh. Our mappings are optimal with respect to both average dilation and link contention. We discuss the practical implications of these results for message-passing architectures using store-and-forward routing vs. those using wormhole routing.

Item Open Access
OREGAMI: Software Tools for Mapping Parallel Computations to Parallel Architectures (University of Oregon, 1990-01-19) Lo, Virginia M.; Rajopadhye, Sanjay; Gupta, Samik; Keldsen, David; Mohamed, Moataz A.; Telle, Jan
The mapping problem in message-passing parallel processors involves the assignment of tasks in a parallel computation to processors and the routing of inter-task messages along the links of the interconnection network. We have developed a unified set of software tools called OREGAMI for automatic and guided mapping of parallel computations to parallel architectures in order to achieve portability and maximal performance from parallel systems. Our tools include a description language which enables the programmer of parallel algorithms to specify information about the static and dynamic communication behavior of the computation to be mapped. This information is used by the mapping algorithms to assign tasks to processors and to route communication in the network topology. Two key features of our system are (a) the ability to take advantage of the regularity present in both the computation structure and the interconnection network and (b) the desire to balance the user's knowledge and intuition with the computational power of efficient combinatorial algorithms.

Item Open Access
Resilience of Partial k-tree Networks with Edge and Node Failures (University of Oregon, 1989-10-20) Mata-Montero, Erick
The resilience of a network is the expected number of pairs of nodes that can communicate. Computing the resilience of a network is a #P-complete problem even for planar networks with fail-safe nodes. We generalize an O(n^2) time algorithm to compute the resilience of n-node k-tree networks with fail-safe nodes to obtain an O(n) time algorithm that computes the resilience of n-node partial k-tree networks with edge and node failures (given a fixed k and an embedding of the partial k-tree in a k-tree).

Item Open Access
Resilience of Partial k-tree Networks (University of Oregon, 1989-10-20) Mata-Montero, Erick
The resilience of a network is the expected number of pairs of nodes that can communicate. Computing the resilience of a network has been shown to be a #P-complete problem for planar networks and to take O(n^2) time for n-node partial 2-tree networks. We present an O(n) time algorithm to compute the resilience of partial 2-tree networks on n nodes, and, for a fixed k, an O(n^2) algorithm to compute the resilience of n-node partial k-tree networks given with an embedding in a k-tree.

Item Open Access
Negotiation Behavior During Multiple Agent Specification: A Need for Automated Conflict Resolution (University of Oregon, 1989-09-06) Robinson, William N.
Negotiation is part of specification. During specification acquisition, users negotiate amongst themselves and with analysts. During specification design, designers negotiate amongst themselves and with a project leader. Throughout the specification of a system, people communicate needs and constraints in ways which benefit the project or themselves. The study of such behavior has been valuable to disciplines such as contract negotiation and office management. However, software engineering has yet to address negotiation behavior. Here we show how negotiation applies to specification. We present automated means to promote integrative behavior during specification. We conclude that formal models of users' desires and resolution methods are necessary for integrative reasoning.
Item Open Access
Architected Failure Handling for AND-Parallel Logic Programs (University of Oregon, 1989-03-24) Meyer, David M.; Conery, John S.
In this paper, we present an architected approach to failure handling for independent AND-parallel logic programs. That is, the architecture presented here represents its failure handling algorithm as a sequence of simple abstract machine instructions, rather than as a built-in function. Information about data dependencies is used by a compiler to generate special purpose fail routines on a clause-by-clause basis. We also present two simple optimizations that further specialize the handling of failures for each clause.

Item Open Access
The Design of Template Immune Networks: Path Immunity (University of Oregon, 1989-03-24) Farley, Arthur M.; Proskurowski, Andrzej
A network is specified by a topology definition and a protocol definition. A network's topology, represented as a graph, defines its interconnection structure, while the protocol defines its operational behavior. A template is a connected graph. A topology G is immune to a set of templates T if G remains connected under removal of any imbedding of a single element of T. A network is template immune to a set of templates T if its topology is immune to T and its protocol guarantees that all operative sites can communicate in the presence of possible failures. A network is isolated template immune to a set of templates T if it is immune to multiple imbeddings of elements of T, where each imbedded template does not involve vertices that are neighbors of another imbedded template. We discuss networks that are isolated template immune to simple path templates of length k.

Item Open Access
Design Issues in a Minimal Language to Support Lisp-based, Object-based, and Rule-based Programming (University of Oregon, 1988-12-30) Fickas, Stephen; Doerry, Eckehard; Meyer, David; Miller, Peter
In this paper we argue for a minimalist view of language design for Expert System environments. In support of our arguments we present MIN, a minimal language which extends the less-is-better philosophy of Scheme to include both object-based and rule-based components. We discuss the strengths and weaknesses of MIN as an embodiment of the minimalist philosophy, and point to other work supporting this view of language design.

Item Open Access
A Proposed Perspective Shift: Viewing Specification Design as a Planning Problem (University of Oregon, 1988-11-17) Fickas, Stephen; Anderson, John
We argue that in certain problem domains, AI planning can be viewed as a foundation for generation, critiquing, and elaboration of a specification. Two specification design projects in our group are used as a focus of discussion.

Item Open Access
Context, User Models and Interface Design (University of Oregon, 1988-07-12) Douglas, Sarah A.
Most of the existing analytical descriptions of users characterize their performance as a function of the cognitive representation of the command sequences of the computer-based task (e.g. Anderson, Farrell, & Sauers, 1984; Card, Moran & Newell, 1983; Polson & Kieras, 1985; Norman, 1986). This is represented as goal-oriented schemata: procedures, plans, or production rules. Thus, the interface designer need only lay out the command sequences adequate to achieve a set of core tasks to make predictions about user behavior.
These models are by and large restricted to descriptions of error-free, skilled (expert) performance or error-free learner subsets of expert knowledge. Attempts to extend these analytical models to accommodate learner error soon find themselves coping with the problems of prior knowledge (cf. Douglas & Moran, 1983; Riley, 1986), that is, elements of performance that are independent of the computer task representation. Additionally, the existing models make no attempt to represent the ongoing interactive nature of human behavior at the interface. This problem of taking into account aspects of human performance at the interface which are independent of the task representation can be called the context problem. In the remainder of this paper I will attempt to delineate the nature of this problem by defining the notion of context, giving examples of context accommodation in interface design, and discussing the practical and theoretical problems that context creates for user models and interface design.