Python h5py.File Examples

The following are 16 code examples showing how to use h5py.File. They are extracted from open source Python projects. You can vote up the examples you like, or vote down the examples you don't like; your votes are used to surface more high-quality examples.

You may also check out all available functions/classes of the module h5py, or try the search function.
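Before the project excerpts, a minimal self-contained round trip may be useful as orientation. The file name example.hdf5 and dataset name values here are illustrative choices, not taken from any of the projects below:

```python
import h5py
import numpy as np

# Write: mode 'w' creates (or truncates) the file; the context
# manager guarantees the file is closed even on error.
with h5py.File('example.hdf5', 'w') as f:
    f.create_dataset('values', data=np.arange(10))

# Read back: mode 'r' opens read-only and fails if the file is missing.
with h5py.File('example.hdf5', 'r') as f:
    data = f['values'][:]

print(data.sum())  # 45
```

Using the file as a context manager, rather than pairing h5py.File with an explicit close() as several excerpts below do, is the more robust pattern for new code.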


Example 1

From project PyMVPA, under directory mvpa2/base, in source file hdf5.py.

Score: 10
def h5save(filename, data, name=None, mode='w', mkdir=True, **kwargs):
    """Stores arbitrary data in an HDF5 file.

    This is a convenience wrapper around `obj2hdf()`. Please see its
    documentation for more details -- especially the warnings!!

    Parameters
    ----------
    filename : str
      Name of the file the data shall be stored in.
    data : arbitrary
      Instance of an object that shall be stored in the file.
    name : str or None
      Name of the object. In case of a complex object that cannot be stored
      natively without disassembling them, this is going to be a new group,
      otherwise the name of the dataset. If None, no new group is created.
    mode : {'r', 'r+', 'w', 'w-', 'a'}
      IO mode of the HDF5 file. See `h5py.File` documentation for more
      information.
    mkdir : bool, optional
      Create target directory if it does not exist yet.
    **kwargs
      All additional arguments will be passed to `h5py.Group.create_dataset`.
      This could, for example, be `compression='gzip'`.
    """
    if mkdir:
        target_dir = osp.dirname(filename)
        if target_dir and not osp.exists(target_dir):
            os.makedirs(target_dir)
    hdf = h5py.File(filename, mode)
    hdf.attrs.create('__pymvpa_hdf5_version__', '2')
    hdf.attrs.create('__pymvpa_version__', mvpa2.__version__)
    try:
        obj2hdf(hdf, data, name, **kwargs)
    finally:
        hdf.close()

Example 2

From project PyMVPA, under directory mvpa/base, in source file hdf5.py.

Score: 10
def h5save(filename, data, name=None, mode='w', **kwargs):
    """Stores arbitrary data in an HDF5 file.

    This is a convenience wrapper around `obj2hdf()`. Please see its
    documentation for more details -- especially the warnings!!

    Parameters
    ----------
    filename : str
      Name of the file the data shall be stored in.
    data : arbitrary
      Instance of an object that shall be stored in the file.
    name : str or None
      Name of the object. In case of a complex object that cannot be stored
      natively without disassembling them, this is going to be a new group,
      otherwise the name of the dataset. If None, no new group is created.
    mode : {'r', 'r+', 'w', 'w-', 'a'}
      IO mode of the HDF5 file. See `h5py.File` documentation for more
      information.
    **kwargs
      All additional arguments will be passed to `h5py.Group.create_dataset`.
      This could, for example, be `compression='gzip'`.
    """
    hdf = h5py.File(filename, mode)
    hdf.attrs.create('__pymvpa_hdf5_version__', 1)
    try:
        obj2hdf(hdf, data, name, **kwargs)
    finally:
        hdf.close()

Example 3

From project fos-legacy, under directory examples/neurons, in source file swc2hdf2.py.

Score: 10
def create_hdf(pos, parents, labeling, colors):
    # create extendable hdf5 file
    f = h5py.File('neurons2.hdf5', 'w')
    neurons = f.create_group('neurons')
    neurons.create_dataset('position', data=pos)
    neurons.create_dataset('localtopology', data=parents.astype(np.int32))
    neurons.create_dataset('labeling', data=labeling)
    neurons.create_dataset('segmentcolors', data=colors)
    f.close()

Example 4

From project fos-legacy, under directory examples/neurons, in source file swc2hdf.py.

Score: 10
def create_hdf(pos, offset, parents, colors):
    # create extendable hdf5 file
    f = h5py.File('neurons.hdf5', 'w')
    neurons = f.create_group('neurons')
    neurons.create_dataset('positions', data=pos)
    neurons.create_dataset('offset', data=offset)
    neurons.create_dataset('parents', data=parents)
    neurons.create_dataset('colors', data=colors)
    f.close()

Example 5

From project hifive-master, under directory test, in source file test_fivec_data.py.

Score: 10
def setUp(self):
    self.data = h5py.File('test/data/test_import.fcd', 'r')
    self.frag_fname = 'test/data/test.frags'
    self.count_fname = 'test/data/test.counts'
    self.bam_fname1 = 'test/data/test_fivec_1.bam'
    self.bam_fname2 = 'test/data/test_fivec_2.bam'

Example 6

From project hifive-master, under directory test, in source file test_hic_project.py.

Score: 10
def test_hic_project_preanalysis(self):
    subprocess.call("./bin/hifive hic-project -q -m 20000 -f 10 -j 30000 -n 5 %s test/data/test_temp.hcp" %
                    (self.data_fname), shell=True)
    project = h5py.File('test/data/test_temp.hcp', 'r')
    self.compare_hdf5_dicts(self.data, project, 'project')

Example 7

From project hifive-master, under directory test, in source file test_fivec_project.py.

Score: 10
def test_fivec_project_preanalysis(self):
    subprocess.call("./bin/hifive 5c-project -q -f 20 %s test/data/test_temp.fcp" %
                    self.data_fname, shell=True)
    project = h5py.File('test/data/test_temp.fcp', 'r')
    self.compare_hdf5_dicts(self.raw, project, 'project')

Example 8

From project hifive-master, under directory test, in source file test_fivec_binning.py.

Score: 10
def test_generate_heatmap(self):
    subprocess.call("./bin/hifive 5c-heatmap -q -b 50000 -t -d fragment -a full %s test/data/test_temp.fch" %
                    self.project_fname, shell=True)
    heatmap = h5py.File("test/data/test_temp.fch")
    self.compare_hdf5_dicts(self.heatmap, heatmap, 'heatmap')

Example 9

From project fuel-master, under directory fuel/datasets, in source file hdf5.py.

Score: 10
def __init__(self, file_or_path, which_sets, subset=None,
             load_in_memory=False, driver=None, sort_indices=True,
             **kwargs):
    if isinstance(file_or_path, h5py.File):
        self.path = file_or_path.filename
        self.external_file_handle = file_or_path
    else:
        self.path = file_or_path
        self.external_file_handle = None
    which_sets_invalid_value = (
        isinstance(which_sets, six.string_types) or
        not all(isinstance(s, six.string_types) for s in which_sets))
    if which_sets_invalid_value:
        raise ValueError('`which_sets` should be an iterable of strings')
    self.which_sets = which_sets
    self._subset_template = subset if subset else slice(None)
    self.load_in_memory = load_in_memory
    self.driver = driver
    self.sort_indices = sort_indices
    self._parse_dataset_info()
    kwargs.setdefault('axis_labels', self.default_axis_labels)
    super(H5PYDataset, self).__init__(**kwargs)
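The constructor above accepts either a path string or an already-open h5py.File, and remembers an external handle so it never closes a file it does not own. A minimal sketch of that dual-accept pattern, simplified here to duck typing on the `filename` attribute that an open h5py.File exposes (so the sketch runs without h5py; the helper name `resolve_source` is ours, not from fuel):

```python
def resolve_source(file_or_path):
    # An open h5py.File carries its path in the `filename` attribute;
    # a plain string has no such attribute, so treat it as a path.
    filename = getattr(file_or_path, 'filename', None)
    if filename is not None:
        # External handle: return it so the caller knows not to close it.
        return filename, file_or_path
    # No handle yet: the caller opens and closes the file itself.
    return file_or_path, None

class FakeHandle:
    # Stand-in for an open h5py.File in this sketch.
    filename = 'data.hdf5'

print(resolve_source('data.hdf5'))  # ('data.hdf5', None)
path, handle = resolve_source(FakeHandle())
print(path)  # data.hdf5
```

Tracking whether the handle is external is what lets a dataset class close only the files it opened itself.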

Example 10

From project fuel-master, under directory tests, in source file test_converters.py.

Score: 10
def setUp(self):
    self.h5file = h5py.File(
        'file.hdf5', mode='w', driver='core', backing_store=False)
    self.train_features = numpy.arange(
        16, dtype='uint8').reshape((4, 2, 2))
    self.test_features = numpy.arange(
        8, dtype='uint8').reshape((2, 2, 2)) + 3
    self.train_targets = numpy.arange(
        4, dtype='float32').reshape((4, 1))
    self.test_targets = numpy.arange(
        2, dtype='float32').reshape((2, 1)) + 3

Example 11

From project fuel-master, under directory tests, in source file test_svhn.py.

Score: 10
def test_svhn():
    data_path = config.data_path
    try:
        config.data_path = '.'
        f = h5py.File('svhn_format_2.hdf5', 'w')
        f['features'] = numpy.arange(100, dtype='uint8').reshape((10, 10))
        f['targets'] = numpy.arange(10, dtype='uint8').reshape((10, 1))
        split_dict = {'train': {'features': (0, 8), 'targets': (0, 8)},
                      'test': {'features': (8, 10), 'targets': (8, 10)}}
        f.attrs['split'] = H5PYDataset.create_split_array(split_dict)
        f.close()
        dataset = SVHN(which_format=2, which_sets=('train',))
        assert_equal(dataset.filename, 'svhn_format_2.hdf5')
    finally:
        config.data_path = data_path
        os.remove('svhn_format_2.hdf5')

Example 12

From project ILTIS-master, under directory lib/pyqtgraph-master/examples, in source file hdf5.py.

Score: 10
def createFile(finalSize=2000000000):
    """Create a large HDF5 data file for testing.

    Data consists of 1M random samples tiled through the end of the array.
    """
    chunk = np.random.normal(size=1000000).astype(np.float32)

    f = h5py.File('test.hdf5', 'w')
    f.create_dataset('data', data=chunk, chunks=True, maxshape=(None,))
    data = f['data']
    nChunks = finalSize // (chunk.size * chunk.itemsize)
    with pg.ProgressDialog("Generating test.hdf5...", 0, nChunks) as dlg:
        for i in range(nChunks):
            newshape = [data.shape[0] + chunk.shape[0]]
            data.resize(newshape)
            data[-chunk.shape[0]:] = chunk
            dlg += 1
            if dlg.wasCanceled():
                f.close()
                os.remove('test.hdf5')
                sys.exit()
        dlg += 1
    f.close()

Example 13

From project dask-master, under directory dask/array, in source file core.py.

Score: 8
def store(sources, targets, **kwargs):
    """ Store dask arrays in array-like objects, overwrite data in target

    This stores dask arrays into object that supports numpy-style setitem
    indexing.  It stores values chunk by chunk so that it does not have to
    fill up memory.  For best performance you can align the block size of
    the storage target with the block size of your array.

    If your data fits in memory then you may prefer calling
    ``np.array(myarray)`` instead.

    Parameters
    ----------
    sources: Array or iterable of Arrays
    targets: array-like or iterable of array-likes
        These should support setitem syntax ``target[10:20] = ...``

    Examples
    --------
    >>> x = ...  # doctest: +SKIP
    >>> import h5py  # doctest: +SKIP
    >>> f = h5py.File('myfile.hdf5')  # doctest: +SKIP
    >>> dset = f.create_dataset('/data', shape=x.shape,
    ...                                  chunks=x.chunks,
    ...                                  dtype='f8')  # doctest: +SKIP
    >>> store(x, dset)  # doctest: +SKIP

    Alternatively store many arrays at the same time

    >>> store([x, y, z], [dset1, dset2, dset3])  # doctest: +SKIP
    """
    if isinstance(sources, Array):
        sources = [sources]
        targets = [targets]

    if any(not isinstance(s, Array) for s in sources):
        raise ValueError("All sources must be dask array objects")
    if len(sources) != len(targets):
        raise ValueError("Different number of sources [%d] and targets [%d]"
                         % (len(sources), len(targets)))

    updates = [insert_to_ooc(tgt, src) for tgt, src in zip(targets, sources)]
    dsk = merge([src.dask for src in sources] + updates)
    keys = [key for u in updates for key in u]
    Array._get(dsk, keys, **kwargs)

Example 14

From project SparseNet-master, under directory sparsenet/dataset, in source file svhn.py.

Score: 8
def load_extra_torch():
    myFile = h5py.File(nn.nas_address()+'/PSI-Share-no-backup/Ali/Dataset/SVHN/torch/svhn_extra_rgb_13.h5', 'r')
    # myFile = h5py.File('svhn_old.h5', 'r')
    X = np.array(myFile['X'])
    # temp = X[10000:10900,:,:,:]
    # nn.show_images(temp,(30,30)); plt.show()
    T_train_labels = np.array(myFile['T_train_labels'])
    T_train_labels = T_train_labels % 10
    # print T_train_labels[100000:1000010]
    print "dataset loaded"
    T = np.zeros((600000, 10))
    for i in range(600000):
        T[i, T_train_labels[i]] = 1
    X_test = np.array(myFile['X_test'])[:10000, :, :, :]
    T_labels = np.array(myFile['T_labels'])[:10000]
    T_labels = T_labels % 10
    T_test = np.zeros((10000, 10))
    for i in range(10000):
        T_test[i, T_labels[i]] = 1
    # if want_bw:
    #     X = X[:,:1,:,:].reshape(70000,1024)
    #     X_test = X_test[:,:1,:,:].reshape(70000,1024)
    # if want_dense:
    #     X = X.reshape(70000,3072)
    #     X_test = X_test.reshape(10000,3072)
    return X, T, X_test, T_test, T_train_labels, T_labels

Example 15

From project praxes, under directory praxes/combi, in source file XRDdefaults.py.

Score: 7
def WAVESET1dFILE(mode='r'):
    if mode != 'r':
        mode = 'r+'  # to avoid 'w'
    return h5py.File('C:/Users/JohnnyG/Documents/CHESS/CHESSANALYSISARRAYS/waveset1d.h5', mode=mode)
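Example 15's mode guard downgrades any write request to 'r+', so an accidental 'w' can never truncate the archive file ('r+' requires the file to exist and opens it without clearing it). That idea generalizes to a small helper; the name `safe_open_mode` is ours, not from the praxes project:

```python
def safe_open_mode(mode='r'):
    # 'r' passes through unchanged; every other request
    # ('w', 'a', 'w-', 'x', ...) is coerced to 'r+', which
    # opens an existing file read-write and never truncates it.
    return 'r' if mode == 'r' else 'r+'

print(safe_open_mode('r'))   # r
print(safe_open_mode('w'))   # r+
print(safe_open_mode('a'))   # r+
```

Passing the result to h5py.File(path, mode=safe_open_mode(mode)) gives the same protection as Example 15 for any file, at the cost of refusing to create files that do not exist yet.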

Example 16

From project hifive-master, under directory test, in source file test_fend.py.

Score: 5
def setUp(self):
    self.fends = h5py.File('test/data/test.fends', 'r')
    self.bed_fname = 'test/data/test_fend.bed'
