What happened with the project.
What we lacked in this project was more teamwork: setting aside the differences we have with one another and being more communicative within the teams that were formed. In my opinion, if we had talked from the beginning, or if all the teams had attended the meetings, it would be a different story.
A cluster with LAM and a graphical interface was also set up in a rush; here is the link on the wiki:
http://elisa.dyndns-web.com/progra/clusterfuncional#Reuni.2BAPM-n_21_de_mayo_2012
Nominations:
Pedro Miguel
Juan Carlos
Rafa.
Sistemas Distribuidos y Paralelos
Thursday, May 24, 2012
Sunday, May 20, 2012
Petri Net
In 1962, Carl Petri introduced a mathematical tool for studying communication with automata. Some of the most important applications of Petri nets have been in the modeling and analysis of communication protocols and of manufacturing systems: they have been used to represent production lines, automated assembly lines, automotive production systems, flexible manufacturing systems, just-in-time systems, and so on.
Petri Nets are bipartite graphs consisting of three types of objects:
- Places.
- Transitions.
- Oriented arcs.
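The three kinds of objects listed above can be sketched as a small data structure with the usual firing rule (a minimal illustration, not taken from the text; all names here are ours): places hold tokens, and a transition fires by consuming tokens from its input places and producing tokens in its output places along the oriented arcs.

```python
# Minimal sketch of a Petri net (an illustration, not from the original text):
# places hold tokens; transitions consume and produce tokens along oriented arcs.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        # inputs/outputs: dicts mapping place -> arc weight
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

net = PetriNet({"P1": 1, "P2": 0})
net.add_transition("t", {"P1": 1}, {"P2": 1})
net.fire("t")
print(net.marking)  # {'P1': 0, 'P2': 1}
```

A transition is enabled only when every input place holds at least as many tokens as its arc weight; firing moves the tokens, which is all the "bipartite graph" structure amounts to operationally.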
Petri nets are considered a tool for the study of systems. With their help we can model the behavior and structure of a system, and push the model to boundary conditions that in a real system would be difficult or very expensive to reach.
The theory of Petri nets has become recognized as an established methodology in the literature of robotics to model flexible manufacturing systems.
Compared with other models of dynamic behavior, such as flowcharts and finite state machines, Petri nets offer a way to express processes that require synchronization. Perhaps most importantly, Petri nets can be analyzed formally to learn about the dynamic behavior of the modeled system.
Modeling a system with mathematical representations requires an abstraction of the system; this is achieved with Petri nets, which can also be studied as automata to investigate their mathematical properties.
Areas of application
- Data Analysis
- Software design
- Reliability
- Workflow
- Concurrent programming
Example:
Consider a single queue serving 100 clients. The arrival times of the customers are successive values of the random variable t, the service times are given by the random variable ts, and N is the number of servers. In the initial state of this model the queue is empty and all servers are idle. The Petri net for this scenario is shown in the figure.
Places are labeled with capital letters and transitions with lowercase letters. The place labels are also used as variables whose values are the token counts.
The arcs carry labels representing the transition functions, which specify the number of tokens removed or added when a transition fires.
Place A initially holds the 100 arriving customers; place B prevents a client from entering more than once; place Q is the queue where customers wait to be served; place S holds the servers idling until there is work to do; and place E counts the customers leaving the system. The initial state gives the places the following values:
- A = 100
- B = 1
- Q = 0
- S = N
- E = 0
Transition a models customers entering the system, and transition b models customers being served.
Classical Petri nets model states, events, conditions, synchronization, and parallelism, among other system features. However, Petri nets that describe the real world tend to be complex and extremely large, and conventional Petri nets do not allow the modeling of data and time. To solve this problem, many extensions have been proposed. They can be grouped into three categories: extensions with time, with color, and with hierarchies.
TIME EXTENSIONS
An important point in real systems is the description of the temporal behavior of the system. Because classical Petri nets are not able to manage time quantitatively, the concept of time is added to the model. Petri nets with time can be divided, in turn, into two classes: Petri nets with deterministic time (timed Petri nets) and Petri nets with stochastic time (stochastic Petri nets).
The family of Petri nets with deterministic time comprises Petri nets that associate a fixed delay with their transitions, places, or arcs.
The family of Petri nets with stochastic time comprises Petri nets that associate a stochastic delay with their transitions, places, or arcs.
Because this type of net associates a delay with the execution of an enabled transition, a reachability tree cannot be established, since the evolution is not deterministic. The analysis methods associated with such nets are Markov chains, in which the probability of reaching a new state depends only on the previous state.
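As a tiny illustration of the Markov-chain idea used to analyze stochastic nets (the transition probabilities here are made up, not from the text), the stationary distribution of a two-state chain can be approximated by repeatedly applying the transition matrix:

```python
# Small illustration of the Markov property: the next state depends only on
# the current one. We approximate the stationary distribution of a 2-state
# chain by iterating the (hypothetical) transition probabilities.

# P[i][j] = probability of moving from state i to state j
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]  # start in state 0
for _ in range(1000):
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

print([round(x, 3) for x in dist])  # → [0.833, 0.167]
```

The iteration converges to the distribution satisfying π = πP, which is exactly the long-run fraction of time the chain spends in each state.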
EXTENSIONS WITH HIERARCHIES
The size problem of Petri nets when modeling real systems can be treated with hierarchical Petri nets. These nets provide, as the name implies, a hierarchy of subnets. A subnet is a set of places, transitions, arcs, and even other subnets, so the construction of a large system is based on a mechanism for structuring two or more processes, each represented by a subnet: one level gives a simple description of the processes, while another level gives a more detailed description of their behavior.
Hierarchical extensions first appeared as extensions to colored Petri nets; however, types of hierarchical Petri nets that are not colored have also been developed.
EXTENSIONS WITH COLOR
Colored Petri nets are an extension of Petri nets with a built-in modeling language. They are considered a modeling language developed for systems in which communication, synchronization, and resource sharing are important. Thus, colored Petri nets combine the advantages of Petri nets with those of classical high-level programming languages.
Bibliography:
http://homepage.cem.itesm.mx/vlopez/redes_de_petri.htm
http://www.mitecnologico.com/Main/RedesDePetri
Scalability
Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added. An analogous meaning is implied when the word is used in a commercial context, where scalability of a company implies that the underlying business model offers the potential for economic growth within the company.
Scalability, as a property of systems, is generally difficult to define and in any particular case it is necessary to define the specific requirements for scalability on those dimensions that are deemed important. It is a highly significant issue in electronics systems, databases, routers, and networking. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. An algorithm, design, networking protocol, program, or other system is said to scale, if it is suitably efficient and practical when applied to large situations (e.g. a large input data set or a large number of participating nodes in the case of a distributed system). If the design fails when the quantity increases, it does not scale.
In information technology, scalability (frequently spelled scaleability) seems to have two usages:
1) It is the ability of a computer application or product (hardware or software) to continue to function well when it (or its context) is changed in size or volume in order to meet a user need. Typically, the rescaling is to a larger size or volume. The rescaling can be of the product itself (for example, a line of computer systems of different sizes in terms of storage, RAM, and so forth) or in the scalable object's movement to a new context (for example, a new operating system).
2) It is the ability not only to function well in the rescaled situation, but to actually take full advantage of it. For example, an application program would be scalable if it could be moved from a smaller to a larger operating system and take full advantage of the larger operating system in terms of performance (user response time and so forth) and the larger number of users that could be handled.
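One standard way to make the "proportional improvement" question concrete, not mentioned in the text itself, is Amdahl's law: the serial fraction of a workload bounds the speedup obtainable by adding processors. A quick sketch:

```python
# Sketch using Amdahl's law (an addition for illustration, not from the text):
# the serial fraction s of a workload limits the achievable speedup, which is
# why performance rarely improves exactly in proportion to added hardware.

def amdahl_speedup(serial_fraction, processors):
    """Speedup when the parallel part of a workload runs on `processors`."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With 5% serial work, 16 processors give well under a 16x speedup.
for p in (1, 2, 4, 8, 16):
    print(p, round(amdahl_speedup(0.05, p), 2))
```

A design that "scales" in the article's sense is one whose serial fraction stays small as the problem and the machine grow.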
Bibliography:
http://en.wikipedia.org/wiki/Scalability
http://searchdatacenter.techtarget.com/definition/scalability
Thursday, May 17, 2012
Week 15
This week's contribution was working more on PVM with my teammates Alejandro, Esteban and Jonathan.
Here is the link:
http://elisa.dyndns-web.com/progra/SistemasParalelos/XPVM
My nominations go to:
Esteban Sifuentes
Pedro Miguel
Saturday, May 12, 2012
Grid Computing
Grid computing is an innovative technology that enables the coordinated use of all kinds of resources (computing, storage, and specific applications) that are not subject to centralized control. In this sense it is a new form of distributed computing, in which resources can be heterogeneous (different architectures, supercomputers, clusters, ...) and are connected through wide area networks (for example, the Internet). Developed in scientific fields in the early 1990s, grids entered the commercial market following the idea of so-called utility computing, a major revolution. The term grid refers to an infrastructure that allows the integrated, collective use of high-performance computers, networks, and databases owned and managed by different institutions.
Grid computing refers to a system for sharing distributed, geographically decentralized resources to solve large-scale problems. Shared resources can be computers (PCs, workstations, supercomputers, PDAs, laptops, phones, etc.), software, data and information, special instruments (radio telescopes, etc.), or people.
Grid computing offers many advantages over alternative technologies. The power offered by a multitude of networked computers using a grid is virtually unlimited, and the grid provides seamless integration of heterogeneous systems and devices, so the connections between different machines do not generate problems. It is a highly scalable, powerful, and flexible technology that avoids problems of resource scarcity (bottlenecks) and never becomes obsolete, because the number and characteristics of its components can be modified.
These resources are distributed across the network transparently while keeping security guidelines and management policies, both technical and economic. The goal is to share a set of online resources in a uniform, secure, transparent, efficient, and reliable way, providing a single point of access to geographically distributed resources in different management domains. This suggests that grid computing enables the creation of virtual enterprises. It is important to know that a grid is a distributed set of machines that help carry out heavy software workloads.
Some features of grid computing are:
- It works over a network architecture with well-defined protocols.
- Virtual teams can be grouped around the world.
- It requires access controls and security.
- Various institutions can pool their resources to obtain results.
- The grid locates the least-used machines and assigns them tasks.
- Not every problem can be solved with a grid.
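The point above about locating the least-used machines and assigning them tasks can be sketched as a toy scheduler (machine names and task costs are hypothetical, not from the text):

```python
# Toy sketch of the grid feature above: assign each incoming task to the
# machine with the least accumulated load. Names and costs are hypothetical.

import heapq

def schedule(tasks, machines):
    """Assign each (task, cost) pair to the currently least-loaded machine."""
    heap = [(0, m) for m in machines]        # (current load, machine name)
    heapq.heapify(heap)
    assignment = {m: [] for m in machines}
    for task, cost in tasks:
        load, m = heapq.heappop(heap)        # least-used machine
        assignment[m].append(task)
        heapq.heappush(heap, (load + cost, m))
    return assignment

tasks = [("t1", 4), ("t2", 2), ("t3", 1), ("t4", 3)]
print(schedule(tasks, ["node-a", "node-b"]))
```

A real grid scheduler would also track machine capabilities and live load reports, but the greedy "send work to the idlest node" idea is the same.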
Moreover, this technology gives companies the benefit of speed, providing a competitive advantage and shorter times for the production of new products and services.
It makes it easy to share, access, and manage information through collaboration and operational flexibility, combining not only diverse technological resources but also people with different skills. It also tends to increase productivity by giving end users access to the computing, storage, and data resources they need, when they need them.
With respect to security, the grid is supported by "intergrids", which offer the same security as the LAN on which the grid technology runs.
Parallelism may be seen as a problem, since a dedicated parallel machine is very expensive. But if a set of small or medium-sized heterogeneous machines is available whose combined computing power is sufficiently large, distributed systems can deliver high computing power at very low cost.
To maintain its structure, grid computing needs services such as Internet connections 24 hours a day, 365 days a year, bandwidth, server capacity, security, VPNs, firewalls, encryption, secure communications, security policies, ISO compliance, and more. Without all these functions and features we cannot speak of grid computing.
Fault tolerance means that if one of the machines that make up the grid fails, the system recognizes it and forwards the task to another machine, thereby maintaining flexible and robust operational infrastructures.
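A minimal sketch of this failover behavior (the machine list and failure model here are hypothetical, not part of any real grid middleware API):

```python
# Minimal sketch of grid fault tolerance: if a machine fails while running a
# task, the task is forwarded to another machine. Machines are modeled as
# plain callables; a RuntimeError stands in for a node failure.

def run_with_failover(task, machines):
    """Try the task on each machine in turn; return the first success."""
    for machine in machines:
        try:
            return machine(task)
        except RuntimeError:
            continue  # this machine "collapsed": forward the task to the next
    raise RuntimeError("all machines failed")

def broken(task):
    raise RuntimeError("node down")

def healthy(task):
    return f"done: {task}"

print(run_with_failover("render frame 7", [broken, healthy]))  # done: render frame 7
```

Real middleware detects failures with heartbeats and resubmits from a job queue, but the resubmit-on-failure loop is the core of the idea described above.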
Applications of Grid Computing
Currently, there are five general applications for Grid Computing:
Supercomputing.
These are applications whose needs cannot be met by a single node. The needs occur at specific instants of time and are resource-intensive.
Real-time distributed systems.
These are applications that generate a flow of high-speed data to be analyzed and processed in real time.
Specific services.
Here the focus is not computing power or storage capacity, but resources that an organization may consider surplus; the grid makes those organizational resources available.
Data intensive process.
These are applications that make heavy use of storage space. Such applications exceed the storage capacity of a single node, so the data are distributed throughout the grid. Besides the benefit of increased space, distributing the data along the grid allows it to be accessed in a distributed manner.
Collaborative virtual environments.
An area associated with the concept of tele-immersion, which uses the vast, distributed resources of grid computing to produce distributed 3D virtual environments.
Bibliography:
http://es.wikipedia.org/wiki/Computaci%C3%B3n_grid
http://www.textoscientificos.com/redes/computacion-grid/ventajas-desventajas-aplicaciones
Thursday, May 10, 2012
Week 14
This week's contribution was working with my teammates Alejandro and Jonathan on PVM.
link:
wiki
I nominate
Pedro: http://pedrito-martinez.blogspot.mx/2012/05/benchmarks.html
Osvaldo: http://4imedio.blogspot.mx/2012/05/reporte-semana-14-paralelos.html
for these contributions.
Benchmark
In computing, a benchmark is an application designed to measure the performance of a computer or any of its components. The machine is subjected to a series of workloads or stimuli of different types in order to measure its response to them. This makes it possible to estimate under which tasks a given computer behaves reliably and effectively, and under which it is inefficient. This information is very useful when selecting a machine for specific tasks, for example in the post-production and creation of audiovisual products, choosing the most appropriate one for a given process.
A benchmark is also useful for estimating how obsolete a system is, or which aspects of its technical performance could be improved through upgrades.
On the other hand, a benchmark can provide all the technical specifications together with a computer's performance under different stimuli, allowing comparisons between systems according to their specifications and performance. Such comparisons help determine which technical characteristics are ideal for optimal performance on a specific task. Comparing multiple computers from different manufacturers (with different specifications) lets us determine a priori which are more suitable for certain applications and which are better for others.
In the early 1990s, a definition of "supercomputer" was needed to produce comparable statistics. After experimenting with various metrics based on the number of processors, in 1992 the idea arose at the University of Mannheim of using a list of production systems as the basis for comparison; this became the TOP500 project.
A year later, Jack Dongarra joined the project with the Linpack benchmark. In May 1993 the first test list was assembled from data published on the Internet:
- The statistics series on supercomputers at the University of Mannheim (1986-1992)
- The list of the world's most powerful computing sites, maintained by Gunter Ahrendt
- Information gathered by David Kahaner
The power of the systems is measured with the HPL benchmark, a portable version of the Linpack benchmark for distributed-memory computers.
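In the same spirit, a toy benchmark can time a dense floating-point kernel and report a rate. This is only a sketch, not the real HPL (which solves a dense linear system), and the numbers it prints are machine-dependent:

```python
# Toy benchmark in the spirit of Linpack/HPL (a sketch, not the real HPL):
# time a dense matrix multiplication and report the floating-point rate.
# Real Linpack solves a linear system; multiplication keeps the sketch short.

import random
import time

def matmul_flops(n):
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    flops = 2 * n ** 3          # one multiply and one add per inner step
    return flops / elapsed      # floating-point operations per second

print(f"{matmul_flops(100) / 1e6:.1f} MFLOPS")
```

The same idea, applied to a tuned linear-system solve at a much larger n, is what produces the flop/s figures used to rank the machines below.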
Note that the list does not include grid computing systems, nor the MDGRAPE-3 supercomputer, which reaches a petaflop and is more powerful than any system on the list but cannot run the benchmark software, as it is not a general-purpose supercomputer.
All lists published since the beginning of the project are available on the project's website, so there is no point in copying that information elsewhere.
Machines that have occupied the number 1
Fujitsu K computer (Japan, June 2011 - present)
NUDT Tianhe-1A (China, November 2010 - June 2011)
Cray Jaguar (USA, November 2009 - November 2010)
IBM RoadRunner (USA, June 2008 - November 2009)
IBM Blue Gene/L (USA, November 2004 - June 2008)
NEC Earth Simulator (Japan, June 2002 - November 2004)