Two-dimensional inversion of gravity data by defining primary points or strikes of an underground anomaly
Behnam
Mahdioghli Lahroodi
author
Vahid
Ebrahimzadeh Ardestani
author
text
article
2015
per
Considering the development of different modeling methods, geophysicists are looking for methods that can, first, model complex geological structures and, second, avoid being time-consuming. In this study, we aimed to model gravity anomalies by defining prior information. We studied a different method for interpreting 2D gravity anomalies produced by multiple and complex gravity sources separated from each other by short distances. This approach combines the best features of automatic inversion and forward modeling. The assumed interpretation model is a grid of 2D prisms placed side by side; the density contrasts of this grid are the parameters to be determined. The interpreter designates the outlines of the gravity sources in terms of geometric elements (line segments and points) and the density contrast associated with the geometric elements defining each gravity source structure (this amounts to specifying the supposed density contrast for each source). The method then estimates the density-contrast distribution that fits the observed anomaly within the measurement errors and represents compact gravity sources closest to the specified geometric elements. The user can either accept the interpretation or adjust the gravity-source structure, changing the position of the geometric elements and/or the density contrast associated with each of the elements, and begin the inversion once again. In fact, we estimate the geometrical shape of the underground anomalies. The interpreter defines some initial points and line segments with predefined density contrasts, and the modeling process then estimates the final shape and density contrasts in the vicinity of the points and line segments. A computer code in MATLAB was written by the authors for these targets.
The advantage of this method was to let the user control the data fit. This capability helps the user consider the noise existing in the data to achieve a more acceptable model, as was shown for synthetic models. The method was tested on simple and complex synthetic models in two states, i.e. without noise and with random noise. Other advantages of this method were its ability to invert complex geometric models and its lack of sensitivity to the points and line segments given as prior information.
The method's practical application was shown by applying it to two sets of gravity data from different geologic settings: (1) modeling a karstic cavity in the Havasan region in Ilam Province, Iran; (2) modeling a barite ore body in the Abadeh region in Fars Province, Iran. A Scintrex CG3 gravimeter with a sensitivity of 5 microGal was used for micro-gravity observations in the selected areas. Station altitudes were measured with a Leica TC 407 total station with an accuracy of 1-5 mm in horizontal and vertical coordinates. The residual gravity grids were obtained using the Geosoft software.
For both regions, forward modeling was carried out first. Then the inversion method (the method used in this study) was applied to the gravity data and compared with the forward modeling method. Indeed, this inversion method offered a better data fit and a more acceptable model. When the user has information about approximate anomaly locations and density contrasts, this method is one of the best choices for 2D gravity modeling.
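As an illustration of the inversion scheme described above, the following minimal Python sketch builds a sensitivity matrix for a grid of 2D prism cells and fits density contrasts by damped least squares pulled toward an interpreter-supplied prior. It is not the authors' MATLAB code: the cell geometry, damping value, and prior body are hypothetical, and each cell is approximated by an equivalent infinite horizontal line mass rather than an exact 2D prism formula.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def kernel(x_obs, xc, zc, area):
    """Vertical gravity at surface points x_obs due to unit density in
    2-D prism cells, each approximated by an infinite horizontal line
    mass through the cell centre (xc, zc) of cross-sectional area `area`."""
    dx = x_obs[:, None] - xc[None, :]
    return 2.0 * G * area * zc[None, :] / (dx**2 + zc[None, :]**2)

rng = np.random.default_rng(0)
nx, nz, dx, dz = 40, 15, 25.0, 25.0          # hypothetical cell grid
xc = (np.arange(nx) + 0.5) * dx
zc = (np.arange(nz) + 0.5) * dz
XC, ZC = [a.ravel() for a in np.meshgrid(xc, zc)]
x_obs = np.linspace(0.0, nx * dx, 80)
A = kernel(x_obs, XC, ZC, dx * dz)           # sensitivity matrix

# synthetic "observed" anomaly of a compact body, plus ~1 microGal noise
rho_true = np.where((np.abs(XC - 500) < 60) & (np.abs(ZC - 150) < 50), 400.0, 0.0)
d_obs = A @ rho_true + rng.normal(0.0, 1e-8, x_obs.size)

# prior density built from the interpreter's points/line segments
rho_0 = np.where((np.abs(XC - 500) < 30) & (np.abs(ZC - 150) < 30), 400.0, 0.0)

# damped least squares around the prior; mu is tuned interactively by
# the user to the desired data fit (the paper's accept-or-adjust loop)
mu = 1e-18
lhs = A.T @ A + mu * np.eye(A.shape[1])
rho_est = rho_0 + np.linalg.solve(lhs, A.T @ (d_obs - A @ rho_0))
print("rms misfit (m/s^2):", np.sqrt(np.mean((A @ rho_est - d_obs)**2)))
```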
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33569_559101533572316d273a092e0ba0db42.pdf
New approach for vertical deflection determination using digital zenith cameras
Abbas
Abedini
author
Saeed
Farzaneh
Shiraz
author
text
article
2015
per
Celestial positioning has been used for navigation purposes for many years. Stars, as extra-terrestrial benchmarks, provide a unique opportunity in absolute point positioning. However, astronomical field data acquisition and the processing of the collected data are very time-consuming. The advent of the Global Positioning System (GPS) has nearly made celestial positioning obsolete. The new satellite-based positioning system has been very popular since it is quite efficient and convenient for many daily life applications. Several years ago, the determination of vertical deflections (the angle between the true zenith (plumb line) and the line perpendicular to the surface of the reference ellipsoid) often required 2-3 h or even more, using conventional astrogeodetic instrumentation such as analogue zenith cameras or astrolabes. The invention of electro-optical devices at the beginning of the 21st century was really a rebirth in geodetic astronomy. Today, digital cameras with relatively high geometric and radiometric accuracy have opened a new insight into satellite attitude determination and the study of the Earth's surface geometry and the physics of its interior, i.e. the computation of astronomical coordinates and the vertical deflection components. The Digital Zenith Camera System consists of a zenith camera equipped with a CCD imaging sensor, which is used for the determination of the astronomical latitude, Φ (the angle between the plane of the Earth's equator and the plumb line (direction of gravity) at a given point on the Earth's surface), and longitude, Λ (the angular distance of a point on the celestial sphere from the great circle perpendicular to the ecliptic at the point of the vernal equinox, measured through 360° eastward parallel to the ecliptic), by means of the positions of stars on the celestial sphere, which are defined by their equatorial coordinates (α, δ). The equatorial coordinates can be linked to the astronomical parameters by GAST (Greenwich Apparent Sidereal Time): Φ = δ and Λ = α − GAST.
The second component is a GPS receiver, which is used for time tagging of the exposure epochs as well as for determining the geodetic latitude and longitude (φ, λ) of the camera. Vertical deflections at the surface can be obtained by combining both components: ξ = Φ − φ and η = (Λ − λ)·cos φ.
In automatic star detection, high precision and reliability in extracting the star centers from the captured images and relating them to the astronomical coordinates are the most important points. In this study, the star centers were extracted by an advanced image processing technique with sub-pixel precision. Relating the parameters of the presented technique to the star's magnitude was one of its exclusive properties. Using the theory of coherent motion, the corresponding stars were detected first, and the outliers were removed by the MSAC algorithm afterwards. The suggested method was applied to the images taken by a TZK2-D camera, which consisted of two major components: a zenith camera equipped with a CCD used for the determination of the plumb line, and a GPS receiver for precise timing and measurement of the ellipsoidal coordinates. Validations showed that the adopted approach in this study is highly capable of yielding reliable results.
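The two relations quoted above translate directly into code. The short Python sketch below is a minimal illustration with hypothetical coordinates (not from the paper): it converts the equatorial coordinates of the zenith point into astronomical latitude and longitude via GAST and then forms the deflection components against geodetic coordinates.

```python
import numpy as np

def astro_from_equatorial(delta_deg, alpha_hours, gast_hours):
    """Astronomical latitude/longitude of the zenith point from the
    equatorial coordinates (alpha, delta) of the stars and GAST."""
    Phi = delta_deg                           # latitude = declination of zenith
    Lam = (alpha_hours - gast_hours) * 15.0   # hours -> degrees, east positive
    return Phi, ((Lam + 180.0) % 360.0) - 180.0

def vertical_deflection(Phi, Lam, phi_geod, lam_geod):
    """Deflection components xi, eta (arcsec) from astronomical (Phi, Lam)
    and geodetic (phi, lambda) coordinates, all in degrees."""
    xi = (Phi - phi_geod) * 3600.0
    eta = (Lam - lam_geod) * np.cos(np.radians(phi_geod)) * 3600.0
    return xi, eta

# hypothetical example values (roughly Tehran), for illustration only
Phi, Lam = astro_from_equatorial(35.70, alpha_hours=8.3968, gast_hours=4.97)
print(vertical_deflection(Phi, Lam, phi_geod=35.6997, lam_geod=51.4015))
```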
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33570_076e927fe4911d6b9e95ef36d95c78ce.pdf
An improvement of 2-D inversion of MT data using automatic selection methods for regularization parameter
Reza
Ghaedrahmati
Lorestan University
author
Ali
Moradzadeh
author
Nader
Fathianpour
author
Seong
Kon Lee
author
text
article
2015
per
The inverse problem is usually ill-posed, which means that there is more than one model that fits the noisy data. Such problems are solved through regularization, and a major computational cost arises because the regularization parameter is not known a priori.
   Undoubtedly, the most common and well-known form of regularization is Tikhonov regularization (Tikhonov and Arsenin, 1977). In Tikhonov regularization, the regularization parameter (λ) acts to trade off between minimizing the norm of the data misfit and the norm of the model. A good regularization parameter should yield a fair balance between the misfit and the model norm in the regularized solution.
   One of the main problems in the solution of an inverse problem in terms of Tikhonov regularization is that the regularization parameter is unknown. The inversion algorithms for selecting the regularization parameter can be roughly divided into two groups. In one group, the regularization parameter is set to a fixed value and the problem is solved with this fixed parameter during the inversion process. In the other group, the regularization parameter is estimated at each iteration of the inversion. In the first approach, which uses a fixed regularization parameter, the minimization of the Tikhonov functional of the inverse problem is carried out a few times; each minimization is solved with a trial regularization parameter and a solution is obtained. If the solution is judged to be satisfactory by some criteria, then the inverse problem is considered to have been solved. In the second group, it is instead preferred that the regularization parameter be estimated at each iteration of the inversion. Most of the methods in the second group use the discrepancy principle criterion to choose the regularization parameter. The discrepancy principle is based on the noise level of the data; unfortunately, for most field data the noise level is not known.
   In this study, it was attempted to use the modified generalized cross validation (GCV) and L-curve criteria as two automatic selection approaches for regularization parameter in two-dimensional (2-D) magnetotelluric (MT) data inversion. GCV is based on the philosophy that if an arbitrary element of the observations is left out, then the corresponding regularized solution should predict this observation well, and the choice of regularization parameter should be independent of an orthogonal transformation of the observation vector. This leads to choosing the regularization parameter which minimizes the well-known GCV function.
   If solutions of the inverse problem are computed for all values of the regularization parameter, the graph, using log-log axes, of the misfit versus the model norm tends to have a characteristic 'L' shape, called the L-curve. The optimal regularization parameter corresponds to a point on the curve near the 'corner' of the L-shaped region. There are several algorithms for finding the corner of the L-curve; here, the robust adaptive pruning algorithm was used for this purpose.
   The above methods were implemented in a 2-D magnetotelluric data inversion code provided by the fourth author (Lee et al., 2009). The performance of each of the two regularization parameter selection methods was then investigated by the 2-D inversion of synthetic and real MT data sets. The resulting 2-D inverse models produced by the inversion of the synthetic data set using the modified GCV and the L-curve approaches were generally in good agreement with the model from which the data were generated. However, compared to the original synthetic model, the model constructed from these synthetic data using the modified GCV was slightly better than the model obtained from the L-curve method. This reflected that the values of the regularization parameter obtained from the modified GCV routine were more suitable than those obtained from the L-curve method, which is clear from the distinct minimum of the GCV function and the slightly indistinct minimum value of the L-curve at each iteration of the 2-D inversion.
   The 2-D geoelectrical models obtained from the real MT data set using the modified GCV and L-curve methods were compared with the model obtained by inverting the same MT data set using the active constraint balancing (ACB) method. In a 2-D inversion using the ACB method, various sets of minimum and maximum values of the regularization parameter were tested over several inversion runs on the real data to obtain a suitable model with respect to the misfit and model norm. The inverse model obtained using the GCV method for the real MT data was well comparable with that obtained using the ACB method. The closeness of the corresponding values of the regularization parameter for the modified GCV and ACB methods in this example could also indicate the robustness of the modified GCV approach for the inversion of MT data. Although the results for the L-curve were not as good as those obtained using the modified GCV, they indicated that this method, combined with an imposed cooling-schedule-type behavior, could be efficient for the inversion of MT data.
   The computation time was not an issue in this study, but it is important and should be considered. For example, on a computer with an Intel Core i5 CPU (2.53 GHz) and 4 GB RAM, the inversions using the modified GCV and the L-curve methods for the real data took 267.4 and 210 sec, respectively, while the computation time for one run of the inversion using the ACB method was 209.8 sec.
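For readers who want to experiment with the two selection rules discussed above, the following self-contained Python sketch evaluates the GCV function and the L-curve for a small synthetic Tikhonov problem via the SVD. The test matrix and noise level are hypothetical, and the corner is located by a simple maximum-curvature rule rather than the robust adaptive pruning algorithm used in the paper.

```python
import numpy as np

def tikhonov_gcv_lcurve(A, d, lams):
    """Evaluate the GCV function and L-curve points for standard-form
    Tikhonov regularization min ||A m - d||^2 + lam^2 ||m||^2, via SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ d
    gcv, misfit, mnorm = [], [], []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
        m = Vt.T @ (f * beta / s)               # regularized solution
        r = A @ m - d
        gcv.append((r @ r) / (len(d) - f.sum())**2)
        misfit.append(np.sqrt(r @ r))
        mnorm.append(np.sqrt(m @ m))
    return np.array(gcv), np.array(misfit), np.array(mnorm)

# small synthetic ill-posed test problem (hypothetical, not MT data)
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 40)) @ np.diag(0.9 ** np.arange(40))
d = A @ np.ones(40) + rng.normal(0, 0.01, 60)
lams = np.logspace(-6, 1, 50)
gcv, misfit, mnorm = tikhonov_gcv_lcurve(A, d, lams)
print("GCV-optimal lambda:", lams[np.argmin(gcv)])

# L-curve corner as the point of maximum curvature of (log misfit, log norm)
x, y = np.log(misfit), np.log(mnorm)
k = np.gradient(x)*np.gradient(np.gradient(y)) - np.gradient(y)*np.gradient(np.gradient(x))
k /= (np.gradient(x)**2 + np.gradient(y)**2)**1.5
print("L-curve lambda:", lams[np.argmax(k)])
```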
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33571_e7759bdee49df9d8729606a0eae13b37.pdf
Magnetic field anomaly separation using empirical mode decomposition
Ahmad
Moradi Shah Ghariyeh
author
Ali
Nejati Kalate
Shahrood
author
Amin
Roshandel Kahoo
Shahrood
author
text
article
2015
per
The separation of geophysical potential fields refers to the separation of the regional and local anomalies from the superimposed anomaly. The Empirical Mode Decomposition (EMD) proposed by Norden E. Huang is a kind of spatial and temporal filtering process in terms of the signal's characteristic extremum scales. It is a new data analysis method suitable for processing non-stationary and non-linear data. Its power to filter and decompose data has earned it a high reputation in signal processing. Empirical mode decomposition is a time-frequency analysis method that can adaptively decompose complex signals. The decomposed components contain different bands of frequencies from high to low, and the residual value is the signal trend component representing the signal's averaged trend, which is similar to the regional anomalies in the geophysical field. The EMD method is an algorithm for the analysis of multicomponent signals that breaks them down into a number of amplitude- and frequency-modulated zero-mean signals, termed intrinsic mode functions (IMFs). An IMF must fulfill two requirements: (1) the number of extrema and the number of zero crossings are either equal or differ at most by one; (2) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. Based on this theory, applying the EMD to separate the geophysical potential field is proposed in this article. When EMD is used for anomaly separation, the problem is to properly identify which IMFs contain residual characteristics: certain modes will consist mainly of the residual, whereas other modes will contain regional and noise characteristics. Magnetic field anomalies are usually superpositions of large-scale and small-scale structure anomalies. Separation of these two categories of anomalies is the most important step in data interpretation. Different methods have been introduced for this task, but most of them are semi-automatic, meaning that the interpreter's opinion can directly affect the results. In this study, the EMD method was used to separate regional and residual magnetic anomalies. EMD yields a residual that is similar to the regional anomaly of potential field data and, unlike contemporary field separation methods, does not require any preset parameters. This automatic method is based on the extraction of the intrinsic oscillatory modes of the data. The efficiency of this method was investigated on both synthetic and real data acquired in the North Mahalat area of Markazi Province to study the regional subsurface geology for geothermal reservoir exploration. Compared to the conventional method of trend analysis, the EMD method is affected by less artificial influence, and we did not need to set any parameters beforehand. Moreover, it reflected the intrinsic physical characteristics of the potential field better. The separation results showed that this technique had higher accuracy than conventional methods such as polynomial fitting and was in good consistency with the region's geology. Finally, the results of the new method were compared with those of the upward continuation filter, and we observed that the two matched.
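The sifting procedure that produces the IMFs described above can be sketched in a few lines of Python. This is a simplified illustration of Huang's algorithm (a fixed number of sifting iterations and spline envelopes without boundary treatment), not the authors' implementation; the test signal is hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=10):
    """One EMD sifting pass: repeatedly subtract the mean of the upper
    and lower cubic-spline envelopes until the result is roughly an IMF."""
    h = x.copy()
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 3 or len(mn) < 3:       # too few extrema: trend reached
            return None
        upper = CubicSpline(t[mx], h[mx])(t)
        lower = CubicSpline(t[mn], h[mn])(t)
        h = h - 0.5 * (upper + lower)        # remove the local mean
    return h

def emd(x, t, max_imfs=8):
    """Decompose x into IMFs plus a residual trend (the regional field)."""
    imfs, r = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(r, t)
        if imf is None:
            break
        imfs.append(imf)
        r = r - imf
    return imfs, r          # r plays the role of the regional anomaly

t = np.linspace(0, 10, 1000)
x = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*0.4*t) + 0.05*t**2
imfs, regional = emd(x, t)
print(len(imfs), "IMFs extracted; residual approximates the regional trend")
```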
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
46
57
https://www.ijgeophysics.ir/article_33572_9effa4011ef0cc1f09877b044b25e484.pdf
The nonlinear modelling of gravity data using estimated depths from all s-values for each shape factor
Marzie
Valieghbal
author
Vahid
Ebrahimzadeh Ardestani
author
text
article
2015
per
An inversion algorithm was developed to estimate the depth and the associated model parameters of anomalous bodies from measured gravity data (Essa, 2011).
   These parameters, including geometrical and physical ones, are defined as the amplitude coefficient and are estimated through the method defined in this paper. One of the most important parameters is the depth of the causative bodies.
   The problem of depth (z) estimation from the observed data was transformed into a nonlinear equation of the form F(z) = 0. This equation was then solved for z by minimizing an objective functional in the least-squares sense through standard iterative methods. These standard iterative methods can solve the problem readily and in the shortest time; however, other numerical methods can also be used for solving the equation, and a more accurate method gives more precise results. Therefore, solving the nonlinear equation is a vital step in obtaining more precise results.
   Using the estimated depth, the amplitude coefficient was computed from the measured gravity data. The method was based on determining the root mean square (RMS) of the depths estimated by using all s-values for each shape factor. The primary shape factors for these simple geometrical shapes are defined as a priori information and are assumed known before the process. The minimum RMS was used as a criterion for estimating the correct shape and depth of the buried structure. When the correct shape factor was used, the RMS of the depths was less than the RMS computed using wrong shape factors. These correct shape factors are actually estimated through the method, differ from the prior ones, and reflect the shape closest to the real shape of the subsurface anomaly.
   In other words, the RMS of the correct shape factor is the smallest one. The proposed approach was applicable to a class of geometrically simple anomalous bodies, such as the semi-infinite vertical cylinder, the horizontal cylinder and the sphere, which can simulate the shapes of most causative bodies. The method was tested on synthetic models with and without random noise. It gave precise results for synthetic models contaminated by 5 to 10 per cent random noise, which is quite acceptable and promising.
   This technique was also successfully applied to real data for mineral exploration. The real data belong to an area with hilly topography located in Fars Province, close to the city of Abadeh, where a barite deposit is under exploration.
   The method was applied to a profile of real data extracted from the residual anomalies and passing through the main detected positive anomaly in the area. It was found that the estimated depths and the associated model parameters were in good agreement with the results obtained through the Euler method and drilling.
   The simple equations of the method and its precise results show its usefulness for obtaining the unknown parameters of causative bodies in gravity data interpretation. Therefore, the method is quite promising for obtaining the unknown parameters of different causative bodies, especially in cases where the shape of the anomaly is close to a sphere or cylinder. This is usually the case in ore-body detection and delineation.
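To see how depths estimated over many s-values single out the correct shape factor, consider the Python sketch below. It uses the common single-parameter forward model g(x) = A·z/(x² + z²)^q (q = 1.5 sphere, 1.0 horizontal cylinder, 0.5 semi-infinite vertical cylinder) and, for brevity, a closed-form depth from the amplitude ratio g(s)/g(0) instead of the paper's iterative solution of F(z) = 0; the spread of the depths across s (here their standard deviation) plays the role of the RMS criterion. All numbers are hypothetical.

```python
import numpy as np

def forward(x, A, z, q):
    """Gravity of a simple body: q = 1.5 (sphere), 1.0 (horizontal
    cylinder), 0.5 (semi-infinite vertical cylinder)."""
    return A * z / (x**2 + z**2)**q

def depth_from_ratio(g0, gs, s, q):
    """Closed-form depth from the ratio g(s)/g(0) for shape factor q,
    using (g(s)/g(0))**(1/q) = z**2 / (s**2 + z**2)."""
    u = (gs / g0)**(1.0 / q)
    return s * np.sqrt(u / (1.0 - u))

rng = np.random.default_rng(0)
x = np.linspace(-200.0, 200.0, 401)
g = forward(x, A=1e4, z=50.0, q=1.0)           # horizontal cylinder at 50 m
g += rng.normal(0, 0.01 * g.max(), x.size)     # ~1% random noise
g0 = g[200]                                    # value at x = 0
svals = np.arange(10.0, 150.0, 5.0)
for q in (0.5, 1.0, 1.5):
    z = np.array([depth_from_ratio(g0, np.interp(s, x, g), s, q)
                  for s in svals])
    print(f"q={q}: mean depth {z.mean():6.1f} m, spread {z.std():5.1f} m")
# the smallest spread flags the correct shape factor (here q = 1.0)
```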
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33573_95ff02d3bfdab25f3ebc0b76c0ded9b8.pdf
Application of principal component analysis to meteorological data in ANN input selection
Mahshid
Kaviani
author
Seyed Majid
MirRokni
author
text
article
2015
per
Properly intelligent "input selection" tailored to the target using an appropriate method is the first step in the design of an Artificial Neural Network (ANN) for prediction. The ANN architecture is not predetermined; the weights are determined from the input data during the training process. Therefore, when the input data are richer, the ANN will be better trained and will have a better performance in predicting meteorological parameters. To solve the nonlinear equations governing atmospheric motions, for which no general solutions are known, meteorologists have to use appropriate approximations for prediction. Using the ability of the ANN to consider nonlinear effects, meteorologists are able to predict most meteorological parameters without dealing with the nonlinear equations governing atmospheric motions. There are two approaches to selecting the appropriate input data. In the first approach, time series of the desired parameter (the ANN target), such as temperature, relative humidity, pressure, and wind speed from previous years, are used, while in the second approach, parameters that have a nonlinear or linear relationship with the ANN target are used. Due to the large volume of input data, measurement errors, the presence of unusual data and correlation between input variables, in the second approach the error increases and the prediction accuracy decreases. In most cases, due to the lack of detailed information concerning the data, the trial and error method has to be used to select a proper combination of input data and to eliminate unusual data. The trial and error method is one of the easiest methods for solving problems; however, since the governing relationships among the parameters are not considered in this method, the solutions may not fit the physical situation. In the present research, to avoid using the trial and error method, we use Principal Component Analysis (PCA) to determine detailed information concerning the input data. PCA has several abilities, such as reducing the dimensions of the data, extracting the variability modes of the data, eliminating the correlation between raw data, and deleting unusual data. These abilities can be used in various applications; for example, it is possible to reduce the data dimensions using PCA when dealing with a large volume of raw data. In fact, we achieve three targets simultaneously by using PCA. The first target is the reduction of the dimensions of the data; therefore, the ANN training process performs better than when raw data are used. The second and third targets are the extraction of variability modes and the deletion of unusual data; thus, the ANN does not deviate and overtraining does not occur. In our accompanying research presented at the First Computational Physics Conference, 20-22 January 2014, we demonstrated that these abilities of PCA were very important in properly intelligent input selection tailored to the target. Meteorological parameters associated with temperature, recorded at the Yazd synoptic station over a 29-year period (1980 to 2008), were analyzed using PCA to determine appropriate parameters for predicting the average daily temperature in 2009. The results showed that, using the numerous capabilities of PCA, a correct, intelligent input selection appropriate for the ANN target is possible without using trial and error methods.
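The PCA workflow the abstract relies on (centering, extracting orthogonal modes, keeping the leading components as decorrelated ANN inputs) can be condensed into a short Python sketch. The synthetic "meteorological" matrix below is hypothetical and merely stands in for the 29 years of daily station records.

```python
import numpy as np

def pca(X, n_keep):
    """PCA via SVD: returns the reduced scores (decorrelated ANN inputs),
    the leading components, and the fraction of variance retained."""
    Xc = X - X.mean(axis=0)                 # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (s**2).sum()               # variance explained per mode
    scores = Xc @ Vt[:n_keep].T             # project onto leading modes
    return scores, Vt[:n_keep], var[:n_keep].sum()

rng = np.random.default_rng(1)
n_days = 10585                              # ~29 years of daily records
base = rng.normal(size=(n_days, 3))         # 3 hidden "weather drivers"
X = np.column_stack([base @ rng.normal(size=(3, 8)),   # 8 correlated variables
                     rng.normal(size=n_days)])          # one noisy variable
scores, comps, kept = pca(X, n_keep=3)
print(f"{kept:.1%} of the variance kept in 3 components")
```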
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33574_4d50037b87d2ce3ccb1c88f5bed26f0c.pdf
Improving the prediction of reservoir porosity through a combination of iso-frequency component, instantaneous bandwidth and time gain seismic attributes: A case study on an oil field in the Persian Gulf
Ali
Hamidi Habib
author
Mohammad Ali
Riahi
author
text
article
2015
per
During the last decade, there has been an increasing interest in the use of attributes derived from 3-D seismic data to define reservoir physical properties such as porosity and fluid content. Therefore, significant advances in the study and application of expert systems in the petroleum industry are needed so that we are able to use such attributes in reservoir characterization. The establishment of an intelligent formulation between two sets of data (inputs/outputs) has been the main topic of such studies. One such topic of great interest has been to characterize how 3D seismic data can be related to lithology, rock types, fluid content, porosity, shear wave velocity and other reservoir properties. Petrophysical parameters, such as water saturation and porosity, are very important data for reservoir characterization. So far, several researchers have worked on predicting them from seismic data using statistical methods and intelligent systems (Russell et al., 2002; Russell et al., 2003; Chopra and Marfurt, 2006).
Two sources of information are commonly available for structural modeling and reservoir characterization: well log data (depth data) from wells and geophysical measurements from seismic surveys, which are often difficult to integrate. While the well data provide the most accurate measurements of depths, there are rarely enough wells to permit an accurate appraisal from well data alone. On the other hand, the seismic data are generally less precise but more abundant. The main purpose of this study was to enhance the characterization of subsurface reservoirs by improving the prediction of porosity through a combination of reservoir geophysics (seismic attributes) and well log data. First, for the statistical determination of reservoir parameters, seismic attributes were combined using the classical techniques of multivariate statistics, and more recent methods of neural network analysis were developed. However, there were important questions to answer: Which attributes had to be combined to estimate the porosity? How were the best attributes selected to achieve the goal? Were all the attributes used in the different combination methods? Was there any software package that contained all the attributes relevant to the petrophysical parameters? To answer these questions, it should be noted that, generally speaking, the conventional attributes available in any software package were used for these ideas, but each package was developed for specific tasks with specific attributes. Therefore, the integration of different attributes from different software packages will improve the process of estimating petrophysical parameters. We used two highly developed and well-known software packages and their attributes for the estimation of porosity. While using these packages, we found that the iso-frequency component, instantaneous bandwidth and time gain attributes were most closely related to porosity. These attributes do not exist in the Hampson-Russell software, the main software for reservoir characterization; therefore, these attributes, beside many other attributes extracted from the Petrel software, were used in a different process of attribute combination to estimate the porosity at well locations. For this study, well logging and seismic data were used to estimate the porosity in an Iranian oil field. In the first step, an inversion was carried out on the seismic data and well logs. Subsequently, seismic attributes were extracted from the mentioned data by mathematical algorithms. Next, the extracted seismic attributes were combined using a step-by-step regression algorithm. In the next stage, we determined a relationship between a set of seismic attributes and a reservoir parameter such as porosity at well locations by using a neural network, and this relationship was then used to calculate reservoir parameters from sets of appropriate seismic attributes throughout a seismic volume. In this study, first the attributes available in the Hampson-Russell software together with the well data were used for porosity estimation; at this stage, the porosity was estimated with good accuracy. Further, to improve the estimation of petrophysical parameters, other seismic attributes related to the petrophysical parameters were extracted from the Petrel software. Then, these attributes, together with the associated attributes available in the Hampson-Russell software, were used in the estimation of porosity. At this stage, the results were better than before.
During this study, the best attributes related to reservoir characteristics from the different software packages were used, and the best combination of attributes for porosity estimation was investigated using multilinear regression and different neural network methods.
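The step-by-step regression mentioned above is essentially a greedy forward selection over candidate attributes. The Python sketch below illustrates that idea on hypothetical data; it is not tied to the Hampson-Russell or Petrel software, and the attribute matrix, sample size, and noise are invented for the example.

```python
import numpy as np

def stepwise_regression(X, y, max_attrs=4):
    """Greedy forward selection of seismic attributes: at each step add
    the attribute that most reduces the least-squares porosity misfit."""
    n = len(y)
    chosen, design = [], np.ones((n, 1))        # start with intercept only
    for _ in range(max_attrs):
        best, best_err, best_design = None, np.inf, None
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            Dj = np.column_stack([design, X[:, j]])
            coef, *_ = np.linalg.lstsq(Dj, y, rcond=None)
            err = np.mean((Dj @ coef - y)**2)
            if err < best_err:
                best, best_err, best_design = j, err, Dj
        chosen.append(best)
        design = best_design
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return chosen, coef

rng = np.random.default_rng(2)
attrs = rng.normal(size=(200, 10))   # 10 candidate attributes at 200 well samples
porosity = 0.2 + 0.05*attrs[:, 3] - 0.03*attrs[:, 7] + rng.normal(0, 0.01, 200)
chosen, coef = stepwise_regression(attrs, porosity)
print("selected attribute indices:", chosen)
```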
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33575_d9a8ee4ea975b95e1449c9b7202ac096.pdf
Evaluation of spectral accelerations for Isfahan region and comparison of the results with the spectral acceleration of Iranian Building Code (Standard No. 2800) and IBC
Seyed Hadi
Dehghan-Manshadi
author
Noorbakhsh
Mirzaei
author
Morteza
Eskandari-Ghadi
author
text
article
2015
per
The spectral acceleration, SA, is the most commonly used tool for building response analysis. In many international or national engineering standards/codes, such as the International Building Code (IBC) and the Iranian Building Code (Standard No. 2800), buildings are designed on the basis of the spectral acceleration method. This method is based on orthogonal functions and has received much attention because of its simplicity. The target of this study was to estimate the spectral accelerations of ground motions due to earthquakes and compare them with the results of Standard No. 2800 and the IBC in Isfahan and the adjoining regions (the area between 49.5-54 E and 31-34.1 N). The spectral accelerations, which are the input data for building design, are determined either directly via spectral attenuation relationships or by multiplying the response factor given in the Iranian Building Code (Standard No. 2800) by the Peak Ground Acceleration (PGA) determined from attenuation relationships. To this end, first, the seismicity parameters in the region of interest for each seismotectonic province were calculated using a unified, homogenized and complete catalog with the method proposed by Kijko and Sellevoll (1992), in which one can consider magnitude uncertainty and the completeness of the data in the calculations. Geological maps at scales of 1:100,000 and 1:250,000 were used to prepare the fault map of this region. We determined several probable faults in the region of interest to help us introduce the potential seismic sources more precisely. Based on the fault maps, the potential seismic sources of the region were determined (twelve and six potential seismic sources for the Zagros and Central-East Iran provinces, respectively). Then, the maps of the maximum and spectral acceleration seismic zonation were prepared using a modified probabilistic approach (Shi et al., 1992) as well as the spectral attenuation relationships of Campbell and Bozorgnia (2003) and of Ambraseys et al. (2005). In the modified probabilistic approach, the concept of a spatial distribution function is introduced: by calculating the spatial distribution function, the contribution to the annual mean occurrence rate of the seismotectonic province is assigned to each potential seismic source. In other words, a spatial distribution function characterizes the seismicity differences among potential seismic sources. The spectral acceleration seismic zonation maps were produced for the Peak Ground Acceleration (PGA) and for periods of 0.2, 0.4, 0.6 and 1 seconds using the EZ-FRISK computer program. By multiplying the spectral response given in Standard No. 2800 by the zonation map of PGA, zoning maps for the other periods were obtained based on the spectral acceleration and compared with the results of the direct estimation method. Macrozonation probabilistic seismic hazard maps of the region of interest for 10% and 63% probabilities of exceedance were produced for each attenuation relation. It was shown that the maximum horizontal acceleration in the city of Isfahan for a 10% probability of exceedance in 50 years, via the Campbell and Bozorgnia (2003) and Ambraseys et al. (2005) relations, was equal to 0.1 g and 0.19 g, respectively. As the spectral acceleration curve for the city of Isfahan showed, the largest horizontal acceleration using the Campbell and Bozorgnia (2003) relation was obtained as 0.35 g at a period of 0.1 sec, while with the use of the Ambraseys et al. (2005) relations, it was obtained as 0.61 g at a period of 0.11 sec.
This means that the frequency content of the accelerations derived from both attenuation relationships was the same. In addition, a comparison between the spectral accelerations obtained via the attenuation relationships and those derived from the IBC and Standard No. 2800 of Iran at different periods, exclusively for the city of Isfahan, showed that up to 0.1 sec, the spectral acceleration of Standard No. 2800 of Iran and the values obtained from the Ambraseys et al. (2005) relations were substantially close to each other. However, the spectral accelerations derived from the Campbell and Bozorgnia (2003) relations had lower values than those obtained based on Standard No. 2800 and the Ambraseys et al. (2005) relations. Considering these results, as well as the smaller contribution of Iranian event data in the Campbell and Bozorgnia (2003) attenuation relations than in Ambraseys et al. (2005), the latter was more reliable for the region of interest in this study. The spectral acceleration obtained from Standard No. 2800 of Iran for periods larger than 0.1 sec showed values higher than those obtained from the other methods. According to the comparison made in the present study, this discrepancy may be due to a lack of sufficient accuracy of the relations proposed in Standard No. 2800 of Iran. All calculations in the present study were conducted for soil type I (according to Standard No. 2800).
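The hazard-map values quoted above (10% probability of exceedance in 50 years) follow from the standard Poisson exceedance model used in probabilistic seismic hazard analysis, 1 − exp(−λ(a)·T) = p, which corresponds to the familiar 475-year return period. The Python sketch below inverts a toy hazard curve for the design acceleration; the curve itself is invented for illustration, not taken from the paper.

```python
import numpy as np

def design_acceleration(accels, annual_rates, p_exceed=0.10, t_years=50.0):
    """Acceleration whose Poisson probability of exceedance over t_years
    equals p_exceed: solve 1 - exp(-rate * T) = p for the target rate."""
    target_rate = -np.log(1.0 - p_exceed) / t_years    # ~1/475 per year
    # interpolate the hazard curve (rates decrease with acceleration)
    return np.interp(np.log(target_rate),
                     np.log(annual_rates[::-1]), accels[::-1])

accels = np.linspace(0.01, 0.6, 60)        # PGA grid in g
rates = 0.05 * np.exp(-accels / 0.08)      # toy annual exceedance-rate curve
print(f"{design_acceleration(accels, rates):.2f} g at 10% in 50 years")
```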
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33576_9cf0f33ba3b2605ca5b329d41f3866df.pdf
Separation and reconstruction of residual and regional gravity sources in wavelet transform domain
Muhammad Ali
Ahmadi
author
Vahid
Ebrahimzadeh Ardestani
author
Loghman
Namaki
Kian Kavan Zamin Company
author
text
article
2015
per
The wavelet transform is used to estimate the geometrical parameters of two-dimensional cross-sections of gravity sources: the continuous wavelet transform reveals the location of the potential field singularities in a geometrical pattern resembling a simple cone whose apex tends toward the corners of the source cross-section. Within the space-scale framework of the continuous wavelet transform, at particular scales related to the wavelength of the causative body's anomaly, the lines formed by joining the modulus maxima of the wavelet coefficients intersect each other at the position of a point source or along the edges of the anomaly source (the multi-scale edge detection method). However, the procedure may fail, since the observed anomalies are superpositions of the effects of different sources. Therefore, the total anomaly signal is separated by decomposing it, from high to low frequencies, into several levels. This method was applied to synthetic data of a complex model in which the shallow source, a structure with a triangular cross-section, is located above the intersection of two trapezoids of infinite length that form the cross-section of a deep structure. Shallow and deep effects are thus often located in low and high levels, respectively. To attenuate the effect of the shallow sources, the majority of the wavelet reconstruction coefficients of the signal were muted in the low levels. Eventually, the remaining wavelet coefficients were reconstructed, yielding a filtered anomaly signal due to the deeper sources. The signal was then analyzed, and the corners of the cross-section of the deep source were estimated by the multi-scale edge detection method. Thus, the effects of the deeper sources were separated from those of the shallower ones by a joint application of the discrete wavelet transform, as a powerful tool, and the continuous wavelet transform. The method was also applied to noisy data (4%).
   The available real data were those of Sardinia (Italy). From a geological point of view, it has a Paleozoic basement consisting mainly of granitic metamorphic rocks; its western sector is intersected by an N-S trending Oligo-Miocene rift (the Sardinia Rift) containing the Campidano graben, whose limits are the Gulfs of Oristano and Cagliari in the southern part of the island. The upper part of the depression is filled by a Pliocene-Quaternary sequence. In this research, the boundaries and lateral extent of the graben were estimated by applying the method to a profile consisting of 334 data points with a 0.6 km step, extended to 512 data points in order to avoid edge effects; the results were in good agreement with other geological and geophysical interpretations.
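A minimal sketch of the discrete-wavelet part of the workflow, muting the fine-scale detail levels and reconstructing the regional (deep-source) field, is given below in Python, assuming the PyWavelets package. The wavelet family, number of levels, the level split, and the two-source test signal are all illustrative choices, not the paper's.

```python
import numpy as np
import pywt

def separate_regional(signal, wavelet="db4", levels=6, mute_finest=3):
    """Mute the finest detail levels (shallow, high-frequency sources)
    and reconstruct; the result approximates the regional field."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    for i in range(len(coeffs) - mute_finest, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])   # zero cD1..cD(mute_finest)
    regional = pywt.waverec(coeffs, wavelet)[: len(signal)]
    return regional, signal - regional          # regional, residual

x = np.linspace(-50.0, 50.0, 512)
deep = 40.0 * 20 / (x**2 + 20**2)               # broad, deep-source anomaly
shallow = 5.0 * 2 / ((x - 8)**2 + 2**2)         # narrow, shallow anomaly
g = deep + shallow + np.random.default_rng(0).normal(0, 0.02, x.size)
regional, residual = separate_regional(g)
print("max regional error vs. true deep field:", np.abs(regional - deep).max())
```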
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33577_16e89088c284c237608ead36c7ad57a6.pdf
Application of the split jet and Rossby-wave breaking indices to study the critical air pollution episodes in Tehran during Nov. and Dec. 2010
Mozhdeh
Hafezi
author
Mozhgan
Rezaeimanesh
author
Alireza
Mohebalhojeh
author
Abbas Ali
Ali Akbari Bidokhti
author
Mohammad Ali
Nasr Esfahani
Shahrekord University
author
text
article
2015
per
With regard to the adverse effects of air pollution in Tehran on its residents, it has become vital to investigate the meteorological factors that determine conditions favorable for the establishment of critical air-pollution episodes. Among these factors, the large-scale dynamical processes acting in the upper troposphere are particularly important, as they can provide a means of predicting the critical episodes using medium-range weather forecasts. As a step in exploring such factors, this study focused on some aspects of the large-scale upper-tropospheric flow believed to have a significant impact on the low-level flow. The upper-tropospheric jet stream, with its possible split, and the Rossby-wave breaking, as a way of detecting and measuring blocking strength, were examined using various diagnostics, including the distribution of potential temperature θ on the 2-PVU surface (one potential vorticity unit equals 10⁻⁶ K m² kg⁻¹ s⁻¹) corresponding to the position of the dynamical tropopause. The analysis was carried out for a prolonged, acute episode of air pollution in Tehran.
   For quantification, the previously introduced Split Flow Index (SFI) and the Rossby-wave breaking index were employed to identify the jet split and blocking formation, respectively. The Global Forecast System (GFS) data, consisting of geopotential height, temperature and horizontal velocity components, were used for the three-month period of 1 Nov 2010 to 31 Jan 2011. The geopotential height at 500 hPa, and the wind speed, relative vorticity and streamfunction fields at 300 hPa, were analyzed to determine the synoptic structure associated with the intense air pollution in Tehran from 22 Nov to 22 Dec 2010 (the month of Azar 1389 in the Iranian calendar). The synoptic structure indicated persistent blockings in the North Atlantic, and the formation of deep troughs over southern Europe together with strong ridges on their downstream sides. High values of the geopotential height field, negative relative vorticity and low wind speed in the middle and upper troposphere were the dominant features over Iran.
   To compute the SFI, three regions had to be involved: the subtropical jet (STJ), the polar front jet (PFJ) and the gap between these two (GAP). The SFI was computed by subtracting the mean relative vorticity of the GAP from the sum of the mean relative vorticities of the STJ and PFJ regions. The sign and magnitude of the SFI were the criteria for the occurrence and strength of a split jet. The results for the SFI were generally sensitive to the longitudinal and temporal intervals used in its computation; by considering broader longitudinal and longer temporal intervals, more accurate values of the SFI could be obtained. With regard to these sensitivities, it was shown that, within the generally positive values of the SFI (representing non-split flow) during most of Nov. and Dec. 2010, there was a period from 9 to 14 Dec. 2010 during which the formation of a transient blocking over central Asia led to negative (positive) values of the SFI at the beginning (end) of this period. The blocking formation was associated with the reversal of the meridional gradient of θ on the 2-PVU surface. For this reason, the Rossby-wave breaking index was used as a dynamical tool to detect and identify a blocking based on the distribution of θ on potential vorticity surfaces. The results showed positive values of the index over Iran for a few days in the middle of Azar (early Dec. 2010), indicating the presence of a transient blocking. There was also a second case of positive values of the index from 11 to 16 Dec., which closely followed the negative values of the SFI (split-flow regime) from 9 to 14 Dec. over central Asia.
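The SFI recipe described above, the sum of the mean relative vorticities over the STJ and PFJ regions minus the mean over the GAP, is straightforward to express in code. In the Python sketch below, the latitude bands, longitudinal window, and vorticity field are hypothetical placeholders; the paper defines the actual regions and uses GFS fields.

```python
import numpy as np

def split_flow_index(zeta, lats, stj_band=(20, 35), gap_band=(35, 50),
                     pfj_band=(50, 65)):
    """SFI = mean vorticity over the STJ and PFJ latitude bands minus
    the mean over the GAP band; negative values flag a split jet."""
    def band_mean(lo, hi):
        rows = (lats >= lo) & (lats <= hi)
        return zeta[rows, :].mean()
    return band_mean(*stj_band) + band_mean(*pfj_band) - band_mean(*gap_band)

# toy 300-hPa relative vorticity field on a lat-lon grid (illustrative)
lats = np.linspace(0.0, 90.0, 91)
lons = np.linspace(40.0, 100.0, 61)        # longitudinal window over central Asia
zeta = 1e-5 * np.sin(np.radians(lats))[:, None] * np.ones((91, 61))
print(f"SFI = {split_flow_index(zeta, lats):.2e} s^-1")
```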
Iranian Journal of Geophysics
Iranian Geophysical Society
2008-0336
9
v.
1
no.
2015
https://www.ijgeophysics.ir/article_33578_3395801648caf12ffeeae6eeb3ce95f3.pdf